Historically, this kind of test optimization was done with static analysis to understand dependency graphs, with runtime data collected from executing the app, or with a combination of the two.

However, those methods are tightly bound to particular programming languages, frameworks, and interpreters, so they are difficult to support across technology stacks.

This approach substitutes the LLM's educated guesses about which tests to run for that analysis, pursuing the same goal: execute every test that could fail and none of the rest (balancing a precision/recall tradeoff). What's especially interesting about this to me is that the same technique could be applied to any language or stack with minimal modification.
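
For concreteness, here is a minimal sketch of the idea in Python, assuming an OpenAI-compatible chat client, a pytest suite, and a hypothetical prompt format; it is not the linked project's actual implementation:

    import json
    import subprocess

    from openai import OpenAI  # any chat-capable LLM client would do; OpenAI is an assumption here

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


    def select_tests(diff: str, all_tests: list[str]) -> list[str]:
        """Ask the model which tests could plausibly fail given the diff."""
        prompt = (
            "Given this code diff and this list of test names, reply with a JSON "
            "array containing only the names of tests that could plausibly fail.\n\n"
            f"DIFF:\n{diff}\n\nTESTS:\n" + "\n".join(all_tests)
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        # A real tool would parse the reply more defensively; here we trust the
        # JSON and drop any hallucinated test names.
        predicted = json.loads(resp.choices[0].message.content)
        return [t for t in predicted if t in all_tests]


    if __name__ == "__main__":
        diff = subprocess.run(["git", "diff", "HEAD~1"], capture_output=True, text=True).stdout
        tests = ["test_login", "test_checkout", "test_search"]  # normally discovered from the suite
        selected = select_tests(diff, tests)
        if selected:
            subprocess.run(["pytest", "-k", " or ".join(selected)])

Nothing in the sketch depends on the language under test; the precision/recall tradeoff lives entirely in the prompt and the post-filtering, which is what makes the approach stack-agnostic.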

Has anyone seen LLMs in other contexts being substituted for traditional analysis to achieve language agnostic results?

