It's no mystery that being a solo developer on a project can be daunting. One of the biggest drawbacks is losing the insight that other developers provide. I have friends who will humor me and let me do walk-throughs from time to time, but it's not as consistent as having a dedicated team that's solely focused on improving your codebase.
AI to the rescue: thanks to the OpenAI API, I've been able to implement a GitHub Action that acts as a team of developers ready to provide feedback on every submission! Several solutions for this already exist, but I found them rather lacking, since so much depends on the prompt used, and we all know that the same message can generate different responses. I'm currently stuck using gpt-3.5-turbo since I don't have access to GPT-4 yet, but I suspect this will only get better once I'm able to integrate it.
My action looks something like this. If you're interested in getting a copy of the code just contact me via the contact form available on the site!
The action.yml file:
```yaml
name: 'ai-code-buddies'
description: 'perform code reviews using the openai gpt-3.5-turbo model and custom prompts'
inputs:
  openai_api_key:
    description: 'openai api key'
    required: true
  source_file_extensions:
    description: 'source files to review -> .h, .c, .cpp'
    default: ".h,.cpp,.c"
    required: false
  exclude_paths:
    description: 'exclusion paths'
    default: "_documentation/,_idea_templates/"
    required: false
  github_token:
    description: 'defaults to {{ github.token }}, the default github token available to actions; only replace it if you need custom permissions.'
    default: '${{ github.token }}'
    required: false
  prompts:
    description: 'the prompts to use when submitting code to the api. one entry here is one submission to the api.'
    default: >-
      You are a Senior C++ developer responsible for code reviews, check the following code for X, Y, Z.
    required: false
runs:
  using: 'node16'
  main: 'index.js'
```
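For context, a consumer workflow might look something like this. This is a hypothetical sketch, not the published action: the repository path `your-user/ai-code-buddies` and the trigger are placeholders. Note that prompts are separated by semicolons, so individual prompts should avoid them.

```yaml
# .github/workflows/code-review.yml (hypothetical consumer workflow)
name: ai-code-review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # hypothetical path to where the action would live
      - uses: your-user/ai-code-buddies@v1
        with:
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
          prompts: >-
            You are a Senior C++ developer, check the following code for memory leaks;
            You are an SDL2 expert, check the following code for incorrect API usage
```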
The key here, to me, is the prompts section. I currently have about ten prompts that ask the AI to take on the role of a C++ game engine developer, an SDL2 expert, and so on, and that request it look for optimization issues, architectural problems, etc.
The ability to tailor these prompts and have each evaluation performed atomically is amazing.
The core functionality in index.js looks something like this:
```javascript
const github_actions_core = require('@actions/core');

// Promise-based delay helper used to pace API calls
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

const main = async () =>
{
    // Get the input prompts
    const promptsInput = github_actions_core.getInput('prompts', {required: true});
    // Split the input string into an array of prompts
    const prompts = promptsInput.split(';').map(prompt => prompt.trim());
    const delay = 5000; // Desired delay in milliseconds between calls
    for(const prompt of prompts)
    {
        await perform_review(prompt);
        await sleep(delay);
    }
};

main().then(() => console.log("done."));
```
In the perform_review function I attempt to submit the entire source file, and fall back to diffs if the token limit would be exceeded. I needed the following to achieve that.
```javascript
const { Tiktoken } = require("@dqbd/tiktoken/lite");
const { load } = require("@dqbd/tiktoken/load");
const registry = require("@dqbd/tiktoken/registry.json");
const models = require("@dqbd/tiktoken/model_to_encoding.json");
```
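The decision itself is simple once you can count tokens. Here's a rough sketch of the idea; the function and constant names are mine, not the action's, and I've swapped the exact tiktoken count for a crude characters-divided-by-four heuristic so the snippet runs standalone:

```javascript
// Rough heuristic: English text and code average ~4 characters per token.
// The real perform_review uses tiktoken for an exact count.
const approximate_token_count = text => Math.ceil(text.length / 4);

const MAX_TOKENS = 4096;       // gpt-3.5-turbo context window
const RESPONSE_BUDGET = 1024;  // tokens reserved for the model's reply

// Prefer the whole file; fall back to the diff when the prompt plus
// full source would not leave room for a response.
function choose_review_content(prompt, fullSource, diff)
{
    const used = approximate_token_count(prompt) + approximate_token_count(fullSource);
    if(used + RESPONSE_BUDGET <= MAX_TOKENS)
    {
        return fullSource;
    }
    return diff;
}
```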
Ultimately, it all comes down to this function:
```javascript
const github_actions_core = require('@actions/core');
const openai_api = require('openai');

async function gpt35_turbo_with_retries(message, maxRetries = 3, delay = 1000)
{
    let retries = 0;
    let response;
    const openai_api_key = github_actions_core.getInput('openai_api_key', {required: true});
    const openai_client = new openai_api.OpenAIApi(new openai_api.Configuration({apiKey: openai_api_key}));
    while(retries <= maxRetries)
    {
        try
        {
            response = await openai_client.createChatCompletion({
                model: 'gpt-3.5-turbo', messages: message,
            });
            // If successful, return the content immediately
            return response.data.choices[0].message.content;
        }
        catch(error)
        {
            console.error(`Attempt ${retries + 1} failed: ${error.message}`);
            retries++;
            // If we've reached the maximum number of retries, throw an error
            if(retries > maxRetries)
            {
                throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
            }
            // Wait for the specified delay before trying again
            await sleep(delay);
        }
    }
}
```
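For reference, the `message` argument is the standard chat-completions message array. Here's a sketch of how a review request might be assembled; the function and variable names are illustrative, not taken from the actual action:

```javascript
// Build the chat-completions message array for one review pass.
// (Names here are hypothetical, not from the real action.)
function build_review_messages(prompt, sourceCode)
{
    return [
        // The prompt from action.yml becomes the system instruction
        { role: 'system', content: prompt },
        // The file (or diff) under review is sent as the user message
        { role: 'user', content: sourceCode },
    ];
}

const messages = build_review_messages(
    'You are a Senior C++ developer responsible for code reviews.',
    'int main() { return 0; }'
);
// messages is then passed as the `message` argument:
// await gpt35_turbo_with_retries(messages);
```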
Here is an example of the output:
![](https://static.wixstatic.com/media/3eda16_5438797907bb4b94b09245eb0f25d817~mv2.png/v1/fill/w_980,h_766,al_c,q_90,usm_0.66_1.00_0.01,enc_auto/3eda16_5438797907bb4b94b09245eb0f25d817~mv2.png)
![](https://static.wixstatic.com/media/3eda16_54ca34b4be714ab7830aa62351d42ef8~mv2.png/v1/fill/w_980,h_1194,al_c,q_90,usm_0.66_1.00_0.01,enc_auto/3eda16_54ca34b4be714ab7830aa62351d42ef8~mv2.png)
Thanks to the power of OpenAI, I now have a team of AI bots ready to critique every line and provide useful, actionable feedback!