A web application that conducts AI-powered interviews to assess whether submitted code was written by a human or AI. Uses the Ribbon API to create personalized code review interviews.
- Code Submission: Students can paste their code and provide basic information
- Automated Interview Generation: Creates personalized interview questions based on the submitted code
- AI-Powered Interviews: Uses Ribbon's voice agent to conduct the interview
- AI Detection Analysis: Analyzes interview responses to determine likelihood of AI-generated code
- Real-time Status Updates: Tracks interview progress and provides real-time feedback
- Detailed Results: Provides scoring, confidence levels, and reasoning for the assessment
- Node.js (version 14 or higher)
- A Ribbon API account and API key
- Sign up at Ribbon AI
- Navigate to your API settings
- Generate an API key
- Copy the API key for use in step 4
```bash
cd "/Users/ciqbian/Desktop/VS/HT6 25/Code Buddy 2"
npm install
```

- Open the `.env` file
- Replace `your_ribbon_api_key_here` with your actual Ribbon API key:

```
RIBBON_API_KEY=your_actual_api_key_here
PORT=3000
```

For development (with auto-restart):

```bash
npm run dev
```

For production:

```bash
npm start
```

Open your browser and navigate to:

```
http://localhost:3000
```
- Students enter their name, email, programming language, and paste their code
- The system generates appropriate interview questions based on the code and language
- Creates a Ribbon interview flow with customized questions
- Generates a unique interview link for the student
- Questions focus on code understanding, implementation details, and problem-solving process
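The generation step above can be sketched as follows. This is an illustrative sketch, not the actual contents of `server.js`: the question text, the `languageQuestions` map, and the loop heuristic are all assumptions about how `generateCodeQuestions` might work.

```javascript
// Sketch of question generation; the real generateCodeQuestions
// in server.js may differ in wording and structure.
function generateCodeQuestions(code, language) {
  // Base questions asked regardless of language
  const questions = [
    "Can you walk me through what this code does, in your own words?",
    "Why did you structure your solution this way?",
    "What was the hardest bug you hit while writing this, and how did you fix it?",
  ];

  // Hypothetical language-specific follow-ups
  const languageQuestions = {
    javascript: ["How does this code handle asynchronous operations, if at all?"],
    python: ["Which built-in data structures did you choose here, and why?"],
  };
  questions.push(...(languageQuestions[language.toLowerCase()] || []));

  // Simple content-based follow-up: ask about loops if the code contains one
  if (/\bfor\b|\bwhile\b/.test(code)) {
    questions.push("Explain the loop in your code: what is it iterating over?");
  }

  return questions;
}
```

The same pattern extends to other code features (recursion, I/O, error handling) by adding further regex-triggered follow-ups.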
- Students click the provided link to start the AI-powered voice interview
- The Ribbon AI agent asks questions about the code
- Students explain their code, discuss their approach, and demonstrate understanding
- Once completed, the system analyzes the interview transcript
- Uses multiple factors to determine AI likelihood:
- Positive indicators (human-written): Personal authorship claims, mentions of debugging, technical terminology usage
- Negative indicators (AI-generated): Mentions of AI tools, uncertainty about code details, very brief responses
- Provides a score from 0-100 and confidence level
- Displays detailed analysis including:
- AI detection score and confidence level
- Assessment of whether code is likely human or AI-generated
- Reasoning behind the score
- Full interview transcript
Submit code for interview creation.
Request Body:

```json
{
  "code": "string",
  "language": "string",
  "studentName": "string",
  "studentEmail": "string"
}
```

Response:

```json
{
  "success": true,
  "sessionId": "string",
  "interviewLink": "string",
  "interviewId": "string"
}
```

Check interview completion status and get results.
Response:

```json
{
  "status": "completed",
  "analysis": {
    "score": 75,
    "confidence": "high",
    "aiLikelihood": "likely human-written",
    "reasoning": "Claims personal authorship (+15); Uses technical terminology (4 matches) (+10)"
  },
  "transcript": "string",
  "studentInfo": {
    "name": "string",
    "language": "string"
  }
}
```

The AI detection algorithm considers multiple factors:
- Personal Authorship (+15): Uses phrases like "I wrote", "I coded", "I implemented"
- Challenges Mentioned (+10): Discusses struggles, difficulties, or challenges
- Debugging Process (+12): Mentions debugging, fixing bugs, or errors
- Technical Terminology (+10): Uses relevant programming concepts and terminology
- AI Tool Mentions (-25): References ChatGPT, AI, or generation tools
- Copy/Paste References (-15): Mentions copying or pasting code
- Uncertainty (-12): Shows lack of understanding with "don't know" or "can't explain"
- Brief Responses (-10): Very short or generic responses
- 70-100: Likely human-written (high confidence if 80+)
- 50-69: Possibly human-written (medium confidence)
- 30-49: Possibly AI-generated (medium confidence)
- 0-29: Likely AI-generated (high confidence if <20)
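Putting the factors and score ranges together, the analysis might be sketched like this. The starting score of 50, the exact regexes, and the factor list are assumptions for illustration; the real `analyzeForAIDetection` in `server.js` may weigh things differently (for example, this sketch omits the technical-terminology and brief-response factors).

```javascript
// Illustrative sketch of the scoring described above.
function analyzeForAIDetection(transcript) {
  const t = transcript.toLowerCase();
  let score = 50; // start neutral (assumption)
  const reasons = [];

  const factors = [
    { re: /\bi (wrote|coded|implemented)\b/, delta: 15, label: "Claims personal authorship" },
    { re: /\b(struggle|difficult|challenge)/, delta: 10, label: "Mentions challenges" },
    { re: /\b(debug|bug|error)/, delta: 12, label: "Describes debugging" },
    { re: /\b(chatgpt|copilot|ai)\b/, delta: -25, label: "Mentions AI tools" },
    { re: /\b(copy|paste|pasted)\b/, delta: -15, label: "Mentions copying code" },
    { re: /(don't know|can't explain)/, delta: -12, label: "Shows uncertainty" },
  ];

  for (const { re, delta, label } of factors) {
    if (re.test(t)) {
      score += delta;
      reasons.push(`${label} (${delta > 0 ? "+" : ""}${delta})`);
    }
  }

  // Clamp, then map to the documented score ranges
  score = Math.max(0, Math.min(100, score));
  const aiLikelihood =
    score >= 70 ? "likely human-written" :
    score >= 50 ? "possibly human-written" :
    score >= 30 ? "possibly AI-generated" : "likely AI-generated";
  const confidence = score >= 80 || score < 20 ? "high" : "medium";

  return { score, confidence, aiLikelihood, reasoning: reasons.join("; ") };
}
```

The likelihood and confidence thresholds follow the score interpretation table above.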
- JavaScript
- Python
- Java
- C++
- C
- C#
- Go
- Rust
- PHP
- Ruby
- Swift
- Kotlin
- Other (custom questions)
Edit the `generateCodeQuestions` function in `server.js` to add language-specific questions.
Update the `analyzeForAIDetection` function in `server.js` to adjust scoring factors and weights.
Customize the base questions in the `generateCodeQuestions` function to focus on different aspects of code understanding.
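For example, adding language-specific questions might look like the fragment below. The `languageQuestions` map name and the question text are hypothetical; adapt them to however `server.js` actually stores its questions.

```javascript
// Hypothetical excerpt: extending a per-language question map
// inside generateCodeQuestions in server.js.
const languageQuestions = {
  javascript: ["How does this code handle asynchronous operations, if at all?"],
  go: [
    "How does your code handle errors, and why did you choose that pattern?",
    "Did you consider using goroutines here? Why or why not?",
  ],
};
```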
- "Missing required fields" error: Ensure all form fields are filled out
- "Failed to create interview" error: Check your Ribbon API key is correct
- Interview not starting: Verify the Ribbon API key has proper permissions
- Results not loading: Check browser console for errors and verify API connectivity
Be aware of Ribbon API rate limits. The application includes basic error handling, but you may need to implement additional retry logic for high-volume usage.
- Store API keys securely (never commit to version control)
- Implement input validation and sanitization
- Consider adding authentication for production use
- Monitor API usage and costs
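A minimal sketch of server-side input validation for the submission endpoint is shown below. The field names follow the documented request body; the email regex and the 50,000-character code cap are assumptions, and the real validation in `server.js` may differ.

```javascript
// Minimal server-side validation sketch for a code submission.
function validateSubmission(body) {
  const errors = [];
  const required = ["code", "language", "studentName", "studentEmail"];

  // Every documented request-body field must be a non-empty string
  for (const field of required) {
    if (typeof body[field] !== "string" || body[field].trim() === "") {
      errors.push(`Missing required field: ${field}`);
    }
  }

  // Loose email shape check (assumption, not RFC-complete)
  if (typeof body.studentEmail === "string" &&
      body.studentEmail.trim() !== "" &&
      !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.studentEmail)) {
    errors.push("Invalid email address");
  }

  // Cap submission size (the 50,000-character limit is an assumption)
  if (typeof body.code === "string" && body.code.length > 50000) {
    errors.push("Code exceeds maximum length");
  }

  return { valid: errors.length === 0, errors };
}
```

Rejecting invalid input before calling the Ribbon API both improves error messages (see the "Missing required fields" item above) and avoids wasting API quota on requests that cannot succeed.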
This project is for educational purposes. Please ensure compliance with Ribbon AI's terms of service.