Show HN: Semcheck – AI Tool for checking implementation follows spec
Hi HN, like many I've been interested in the direction software engineering is taking now that coding LLMs are becoming prevalent. We're not quite there yet for "natural language programming", but new abstractions are already starting to form. To explore this further I've built semcheck (semantic checker), a simple CLI tool you can run in CI or as a pre-commit hook to check that your implementation matches your specification using LLMs.
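To give a rough idea of the shape of a check: a rule pairs spec files with implementation files, plus some extra instructions for the model. Something like this (a simplified, hypothetical sketch in Go; the field names here are illustrative, the actual config schema lives in the repo):

    // Hypothetical, simplified sketch of what a single rule ties together;
    // field names are illustrative, not the actual semcheck schema.
    type Rule struct {
        Name   string   // identifier for the check, e.g. "geojson-spec"
        Specs  []string // spec documents: RFCs, design docs, README sections
        Files  []string // implementation files to compare against the specs
        Prompt string   // extra instructions for the LLM for this rule
    }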
The inspiration came while I was working on another project where I needed a data structure for a GeoJSON object. I passed Claude the text of RFC-7946 and it gave me an implementation. It took some back and forth after that before I was happy with it, but by then the RFC had fallen out of the LLM's context. That's why I asked Claude to check against the RFC again to make sure we hadn't strayed too far from the spec. It occurred to me that it would be good to have a formal way of defining these kinds of checks that can run in a pre-commit or merge-request flow.
Creating this tool was itself an experiment in "spec-driven development" using Claude Code, a middle ground between pure vibe-coding and traditional programming. My workflow was as follows: ask the AI to write a spec and an implementation plan, edit these manually to my liking, then ask the AI to execute one step at a time, being careful that it doesn't drift too far from what I think is required. My very first commit [1] is the specification of the config file structure and an implementation plan.
As soon as semcheck was in a state where it could check itself, it started to find issues [2]. I found that this workflow not only improves your implementation but also helps you refine your specification at the same time.
Besides specifications, I also started to include documentation in my rules, making sure that the configuration examples and CLI flags in my README.md stay in line with the implementation [3].
The best part is that you can feed the issues it finds directly back into your AI editor for a quick iteration cycle.
Some learnings:
- LLMs are very good at finding discrepancies as long as the number of files you pass into the comparison isn't too large; in other words, the true-positive results are quite good.
- False positives: the LLM is a know-it-all (literally) and often thinks it knows better; it's eager to use its own world knowledge to find faults. This can be both nice and problematic. I've often had it complain that my Go version doesn't exist, when it was simply released after that model's knowledge cutoff. I specifically prompt [4] the model to only find discrepancies, but it often "chooses" to use its own knowledge anyway.
- In an effort to reduce false positives I ask the model for a confidence score (0-1) indicating how sure it is that the issue it found actually applies in this scenario. The models are always super confident and almost exclusively output values > 0.7.
- One thing that did reduce false positives significantly is asking the model to give its reasoning before assigning a severity level to an issue (see the sketch after this list).
- In my (rudimentary) experiments I found that "thinking" models like o3 don't improve performance much and aren't worth the additional tokens/time (likely because I already ask for the reasoning anyway).
- The models that perform best for me are Claude 4 and GPT-4.1.
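For context, the per-issue output I ask the model for looks roughly like this (a simplified, illustrative sketch; not the exact semcheck types, the actual prompt and output format are in the repo [4]). Putting the reasoning before severity and confidence is what makes the model explain itself before it commits to a judgement:

    // Simplified, illustrative sketch of the structured output per issue;
    // field names are hypothetical. Reasoning deliberately comes before
    // severity/confidence so the model explains itself first.
    type Issue struct {
        Description string  `json:"description"` // the discrepancy that was found
        Reasoning   string  `json:"reasoning"`   // why spec and implementation disagree
        Severity    string  `json:"severity"`    // e.g. "error", "warning", "notice"
        Confidence  float64 `json:"confidence"`  // 0-1; in practice almost always > 0.7
    }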
Let me know if you can see this being useful in your workflow, and what features you would need to make it work for you.
[1]: https://github.com/rejot-dev/semcheck/commit/ce0af27ca0077fe...
[2]: https://github.com/rejot-dev/semcheck/commit/2f96fc428b551d9...
[3]: https://github.com/rejot-dev/semcheck/blob/47f7aaf98811c54e2...
[4]: https://github.com/rejot-dev/semcheck/blob/fec2df48304d9eff9...