This is a little toy prototype of a tool that attempts to summarize a whole binary using GPT-3 (specifically the text-davinci-003 model), based on decompiled code provided by Ghidra. However, today’s language models can only fit a small amount of text into their context window (4,096 tokens for text-davinci-003, or a couple hundred lines of code at most), so most programs, and even some individual functions, are too big to fit all at once.
GPT-WPRE attempts to work around this by recursively creating natural-language summaries of a function’s dependencies and then providing those summaries as context when summarizing the function itself. It’s pretty neat when it works! I have tested it on exactly one program, so YMMV.
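To make the recursion concrete, here is a minimal sketch of the idea in Python. Everything here is an assumption for illustration: the data structures (`decompiled`, `callees`), the cycle handling, and the `llm_summarize` callback are hypothetical, not the actual implementation, and the real tool’s prompts and call-graph handling may differ.

```python
# Hypothetical sketch of recursive bottom-up summarization over a call
# graph (e.g., one extracted from Ghidra's decompiler output). All names
# and structures here are illustrative assumptions.

from typing import Callable, Dict, FrozenSet, List

def summarize_program(
    decompiled: Dict[str, str],           # function name -> decompiled C
    callees: Dict[str, List[str]],        # function name -> direct callees
    llm_summarize: Callable[[str], str],  # wraps a completion-API call
) -> Dict[str, str]:
    """Summarize every function bottom-up, so each prompt can include
    natural-language summaries of the function's dependencies instead of
    their (too large) decompiled bodies."""
    summaries: Dict[str, str] = {}

    def visit(name: str, stack: FrozenSet[str] = frozenset()) -> str:
        if name in summaries:  # memoize: summarize each function only once
            return summaries[name]
        if name in stack:      # break cycles in recursive call graphs
            return f"{name}: (recursive call, summary pending)"
        context = "\n".join(
            visit(c, stack | {name}) for c in callees.get(name, [])
        )
        prompt = (
            f"Summaries of called functions:\n{context}\n\n"
            f"Decompiled code:\n{decompiled[name]}\n\n"
            f"Summarize what {name} does:"
        )
        summaries[name] = llm_summarize(prompt)
        return summaries[name]

    for fn in decompiled:
        visit(fn)
    return summaries
```

In this sketch, memoization keeps each function down to a single model call, and the stack check prevents infinite recursion on mutually recursive functions; the summaries stand in for decompiled bodies so each prompt stays within the context window.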