
Add tinyllama model agent

Dhivya-Bharathy opened this pull request 7 months ago • 3 comments

User description

This agent uses the TinyLlama-1.1B model to generate code or responses based on user input prompts. It demonstrates a minimal setup for building an AI assistant using Hugging Face Transformers with a lightweight language model.
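As a sketch of that minimal setup, the chat-style prompt can be built with a small helper. The Zephyr-style turn markers below are an assumption about the TinyLlama-1.1B-Chat template (verify with `tokenizer.apply_chat_template` in practice), and the model-loading calls are commented out so the snippet stays self-contained:

```python
def format_chat_prompt(user_message: str,
                       system_message: str = "You are a helpful coding assistant.") -> str:
    # Assumed Zephyr-style template for TinyLlama-1.1B-Chat; confirm against
    # tokenizer.apply_chat_template before relying on it.
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

# The actual generation step (requires transformers, accelerate, torch):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
# model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
# inputs = tokenizer(format_chat_prompt("Write a Python hello world."), return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))

print(format_chat_prompt("Write a Python hello world."))
```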


PR Type

Documentation


Description

  • Added five new Jupyter notebooks in the examples/cookbooks directory, each demonstrating practical AI agent use cases with detailed instructions and code examples.

  • Introduced a notebook for using the TinyLlama-1.1B model as a simple AI agent, including setup, response generation, and usage demonstration.

  • Added a comprehensive code analysis agent notebook, featuring structured reporting with Pydantic schemas and example analysis workflows.

  • Provided a predictive maintenance workflow notebook showcasing multi-agent orchestration for sensor data analysis, anomaly detection, and maintenance scheduling.

  • Included a beginner-friendly notebook for the Qwen2.5-0.5B-Instruct model, guiding users through chat-based generation tasks.

  • Added a Gemma 2B instruction agent notebook, covering model setup, prompt configuration, inference, and model saving for instruction following and code generation.


Changes walkthrough 📝

Relevant files
Documentation
TinyLlama_1_1B_model_SimpleAIAgent.ipynb
Add TinyLlama-1.1B model agent demo notebook with usage example
examples/cookbooks/TinyLlama_1_1B_model_SimpleAIAgent.ipynb (+2762/-0)

  • Added a new Jupyter notebook demonstrating how to use the TinyLlama-1.1B model as a simple AI agent.
  • Includes installation instructions for required packages (transformers, accelerate, torch).
  • Shows how to load the TinyLlama model and tokenizer from Hugging Face.
  • Defines a Python function (generate_response) for generating responses from the model.
  • Provides an example prompt and prints the model's output.
  • Contains markdown explanations and a Colab badge for easy access.
Code_Analysis_Agent.ipynb
Add notebook for AI-powered code analysis agent with structured reporting
examples/cookbooks/Code_Analysis_Agent.ipynb (+459/-0)

  • Added a new Jupyter notebook demonstrating how to build an AI agent for code analysis and quality assessment.
  • Introduced dependency installation, API key setup, and code ingestion using gitingest.
  • Defined a comprehensive Pydantic schema (CodeAnalysisReport) for structured code analysis output.
  • Provided example agent and task setup, a main function to run analysis on a codebase, and sample output display.
  • Included example usage with a mock analysis result and agent info display.
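The structured-report idea behind that notebook can be illustrated with a stdlib stand-in. The notebook uses a Pydantic model named CodeAnalysisReport; the dataclass and field names below are hypothetical, chosen only to show the pattern of returning analysis results as a typed object rather than free text:

```python
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class CodeAnalysisReport:
    # Illustrative fields only; the notebook's actual Pydantic schema may differ.
    overall_score: int                      # e.g. a 0-100 quality score
    maintainability: str                    # e.g. "high" / "medium" / "low"
    issues: List[str] = field(default_factory=list)
    recommendations: List[str] = field(default_factory=list)

# A mock result, mirroring the notebook's "example usage with a mock analysis result".
report = CodeAnalysisReport(
    overall_score=82,
    maintainability="high",
    issues=["functions over 100 lines in one module"],
    recommendations=["split long functions into smaller helpers"],
)
print(asdict(report))  # structured output ready for display or serialization
```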
Predictive_Maintenance_Multi_Agent_Workflow.ipynb
Add predictive maintenance workflow notebook with multi-agent orchestration
examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb (+401/-0)

  • Added a new Jupyter notebook demonstrating a predictive maintenance workflow using multiple AI agents.
  • Included helper functions for simulating sensor data, performance analysis, anomaly detection, failure prediction, and maintenance scheduling.
  • Defined agents and tasks for a multi-step workflow using the praisonaiagents library.
  • Provided example workflow execution and sample output for maintenance planning.
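The helper-function layer can be sketched in plain Python. The names simulate_sensor_data and detect_anomalies echo the helpers the walkthrough mentions, but the logic below (Gaussian readings plus a z-score check) is a simplified stand-in, not the notebook's implementation:

```python
import random
import statistics

def simulate_sensor_data(n: int = 100, seed: int = 42) -> list:
    """Gaussian readings around 50, with rare +30 spikes standing in for faults."""
    rng = random.Random(seed)
    return [rng.gauss(50, 2) + (30 if rng.random() < 0.05 else 0) for _ in range(n)]

def detect_anomalies(readings: list, z_threshold: float = 3.0) -> list:
    """Return indices of readings whose z-score exceeds the threshold."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1.0  # guard against zero variance
    return [i for i, r in enumerate(readings) if abs(r - mean) > z_threshold * stdev]

readings = simulate_sensor_data()
flagged = detect_anomalies(readings)
print(f"{len(flagged)} anomalous readings flagged out of {len(readings)}")
```

In the notebook these helpers feed downstream agents (failure prediction, maintenance scheduling); here the anomaly indices would be the hand-off payload.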
Qwen2_5_InstructionAgent.ipynb
Add Qwen2.5 instruction agent notebook for simple chat generation
examples/cookbooks/Qwen2_5_InstructionAgent.ipynb (+420/-0)

  • Added a beginner-friendly notebook for using the Qwen2.5-0.5B-Instruct model for chat-based generation.
  • Included installation steps, Hugging Face authentication, model loading, and prompt preparation.
  • Demonstrated how to generate a response using a chat template and print the output.
  • Provided clear sectioning and example output for user guidance.
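The chat-template flow that notebook walks through can be sketched as follows. The message structure is shown live; the transformers calls are commented out so the sketch stays self-contained, and the exact generation parameters are assumptions rather than the notebook's values:

```python
# Chat messages in the role/content format that apply_chat_template consumes.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a chat template is in one sentence."},
]

# from transformers import AutoModelForCausalLM, AutoTokenizer
# model_id = "Qwen/Qwen2.5-0.5B-Instruct"
# tokenizer = AutoTokenizer.from_pretrained(model_id)
# model = AutoModelForCausalLM.from_pretrained(model_id)
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# inputs = tokenizer([text], return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))

for m in messages:
    print(f"{m['role']}: {m['content']}")
```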
Gemma2B_Instruction_Agent.ipynb
Add Gemma 2B instruction agent notebook with data prep and inference
examples/cookbooks/Gemma2B_Instruction_Agent.ipynb (+605/-0)

  • Added a notebook for using Google's gemma-2b-it model for instruction following and code generation.
  • Included dependency installation, model/tokenizer setup, and Hugging Face authentication.
  • Demonstrated prompt configuration, dataset tokenization, inference, and model saving.
  • Provided example outputs and step-by-step explanations for users.
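The prompt-configuration and model-saving steps can be sketched as below. The turn-marker template is an assumption about gemma-2b-it's chat format (prefer `tokenizer.apply_chat_template`, which applies the correct one), the output directory is hypothetical, and the heavyweight calls are commented out:

```python
def format_gemma_prompt(instruction: str) -> str:
    # Assumed turn-marker template for gemma-2b-it; in practice, prefer
    # tokenizer.apply_chat_template to get the template guaranteed by the repo.
    return f"<start_of_turn>user\n{instruction}<end_of_turn>\n<start_of_turn>model\n"

# from huggingface_hub import login
# from transformers import AutoModelForCausalLM, AutoTokenizer
# login()  # prompts for the HF token instead of hardcoding it in a cell
# tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
# model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
# ...run inference on format_gemma_prompt(...)...
# model.save_pretrained("./gemma-2b-it-agent")       # hypothetical output dir
# tokenizer.save_pretrained("./gemma-2b-it-agent")   # save tokenizer alongside

print(format_gemma_prompt("Write a function that reverses a string."))
```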

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

Summary by CodeRabbit

    • New Features
      • Added a "Code Analysis Agent" example notebook demonstrating AI-driven code quality assessment, including detailed metrics and recommendations.
      • Introduced a "Gemma 2B Instruction Agent" example notebook showing how to use and fine-tune the Gemma 2B language model for instruction-based tasks.
      • Added a "Predictive Maintenance Multi-Agent Workflow" example notebook showcasing a multi-agent system for predictive maintenance using simulated sensor data and AI agents.
      • Introduced a "Qwen2.5 Instruction Agent" example notebook providing a step-by-step guide for chat-based interactions with the Qwen2.5-0.5B-Instruct model.

Dhivya-Bharathy • Jun 05 '25 13:06

    Walkthrough

    One notebook's "Open in Colab" badge URL is corrected to match the filename. Four new Jupyter notebooks are added demonstrating various AI agents and workflows: Gemma 2B instruction agent, predictive maintenance multi-agent workflow, Qwen2.5 instruction agent, and TinyLlama simple AI agent. These cover setup, model loading, inference, multi-agent orchestration, and saving models.

    Changes

    File(s) Change Summary
    examples/cookbooks/Code_Analysis_Agent.ipynb Updates "Open in Colab" badge URL to match notebook filename casing; no other changes.
    examples/cookbooks/Gemma2B_Instruction_Agent.ipynb Adds notebook demonstrating data prep, inference, and saving with Gemma 2B instruction model.
    examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb Adds notebook for multi-agent predictive maintenance workflow using PraisonAIAgents framework.
    examples/cookbooks/Qwen2_5_InstructionAgent.ipynb Adds notebook for chat-based generation with Qwen2.5-0.5B-Instruct model.
    examples/cookbooks/TinyLlama_1_1B_model_SimpleAIAgent.ipynb Adds notebook demonstrating a simple AI agent using TinyLlama 1.1B model with generation function.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant Notebook
        participant AI_Agent
        participant Model/Workflow
    
        User->>Notebook: Run notebook cells
        Notebook->>Model/Workflow: Setup (install, import, authenticate)
        Notebook->>AI_Agent: Define/configure agent(s) and tasks
        Notebook->>Model/Workflow: Provide input (code, prompt, data)
        Model/Workflow->>AI_Agent: Analyze/process/generate output
        AI_Agent->>Notebook: Return results
        Notebook->>User: Display structured output/results
    

    Possibly related PRs

    • MervinPraison/PraisonAI#600: Both PRs modify the same Code_Analysis_Agent notebook; this PR updates the badge URL while the other adds the notebook content, making them directly related.

    Poem

    🐇✨
    A badge corrected, links aligned,
    New agents born, their skills combined.
    Gemma chats and Qwen replies,
    TinyLlama's wisdom flies.
    Maintenance agents watch and learn,
    In notebooks fresh, the rabbits turn—
    Hopping through code, new paths discern! 🥕📚


coderabbitai[bot] • Jun 05 '25 13:06

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 Security concerns

    Sensitive information exposure:
    Multiple notebooks (Gemma2B_Instruction_Agent.ipynb, Qwen2_5_InstructionAgent.ipynb, Predictive_Maintenance_Multi_Agent_Workflow.ipynb, Code_Analysis_Agent.ipynb) contain code that requires users to directly input API keys or tokens as plaintext in the code. This is a security risk as these credentials could be accidentally committed to version control or shared. Better approaches would include using environment variables, secure credential stores, or Jupyter notebook secrets management.

    ⚡ Recommended focus areas for review

    Hardcoded Token

    The notebook contains a hardcoded placeholder for a Hugging Face token that users need to replace. Consider using a more secure approach for token management.

    login("Enter your token here")
    Token Authentication

    The notebook uses a direct token input approach that requires users to manually enter their Hugging Face token. Consider implementing a more secure token handling method.

    login(token="Enter your huggingface token")
    
    API Key Exposure

    The notebook contains a line where users need to enter their OpenAI API key directly in the code, which is not a secure practice for credential management.

    os.environ['OPENAI_API_KEY'] = 'enter your api key'
    

qodo-code-review[bot] • Jun 05 '25 13:06

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Security
    Secure API key handling

    Hardcoding API keys directly in the notebook is a security risk. Use a more
    secure approach that doesn't expose the key in the code, such as loading from
    environment variables or using a secure input method.

    examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb [66]

    -os.environ['OPENAI_API_KEY'] = 'enter your api key'
    +# Option 1: Prompt for API key input
    +import getpass
    +os.environ['OPENAI_API_KEY'] = getpass.getpass("Enter your OpenAI API key: ")
     
    +# Option 2: Load from .env file (requires python-dotenv package)
    +# from dotenv import load_dotenv
    +# load_dotenv()  # loads API key from .env file
    +
    
    Suggestion importance[1-10]: 7


    Why: Valid security concern about hardcoded API keys. The suggestion provides practical alternatives and accurately identifies the security risk, though it's an error handling/security suggestion which caps the score.

    Impact: Medium

    Remove hardcoded API key

    Hardcoding API keys directly in notebooks is a security risk. Consider using
    environment variables or a secure configuration file that's not committed to
    version control instead.

    examples/cookbooks/Code_Analysis_Agent.ipynb [67]

    -os.environ['OPENAI_API_KEY'] = 'your_api_key_here'
    +# Option 1: Load from environment variable
    +import os
    +# os.environ['OPENAI_API_KEY'] = 'your_api_key_here'  # Don't hardcode keys
     
    +# Option 2: Use a .env file with python-dotenv
    +# from dotenv import load_dotenv
    +# load_dotenv()  # loads variables from .env file
    +
    
    Suggestion importance[1-10]: 6


    Why: While this is a security best practice, the code uses 'your_api_key_here' as a placeholder in a tutorial context, making it less critical than actual hardcoded keys.

    Impact: Low

    General
    Fix repository reference

    The Colab link points to a personal fork rather than the official repository.
    This could cause confusion for users and lead them to an outdated or unofficial
    version of the notebook.

    examples/cookbooks/Code_Analysis_Agent.ipynb [17]

    -[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Code_Analysis_Agent.ipynb)
    +[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/Code_Analysis_Agent.ipynb)
    

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 7


    Why: The Colab link points to a personal fork instead of the official MervinPraison/PraisonAI repository, which could confuse users and lead them to outdated versions.

    Impact: Medium

    Possible issue
    Add error handling

    The function doesn't handle potential CUDA out-of-memory errors that could occur
    when generating responses with large models. Add error handling to gracefully
    fall back to CPU if needed and provide feedback to the user.

    examples/cookbooks/TinyLlama_1_1B_model_SimpleAIAgent.ipynb [305-308]

     def generate_response(prompt, max_length=256):
    -    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    -    outputs = model.generate(**inputs, max_new_tokens=max_length)
    -    return tokenizer.decode(outputs[0], skip_special_tokens=True)
    +    try:
    +        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    +        outputs = model.generate(**inputs, max_new_tokens=max_length)
    +        return tokenizer.decode(outputs[0], skip_special_tokens=True)
    +    except RuntimeError as e:
    +        if "CUDA out of memory" in str(e):
    +            print("Warning: GPU memory exceeded. Falling back to CPU.")
    +            original_device = model.device  # capture before moving off the GPU
    +            inputs = tokenizer(prompt, return_tensors="pt")
    +            model.to("cpu")
    +            outputs = model.generate(**inputs, max_new_tokens=max_length)
    +            model.to(original_device)  # Move model back to its original device
    +            return tokenizer.decode(outputs[0], skip_special_tokens=True)
    +        else:
    +            raise e
    

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 6


    Why: This suggestion adds useful error handling for CUDA out-of-memory scenarios which is a common issue when working with large language models. The fallback to CPU processing provides graceful degradation, though as an error handling improvement it receives a moderate score.

    Impact: Low

    Fix authentication method

    The login function is called with a placeholder token value that requires user
    modification. This will cause authentication failures when users run the
    notebook without changing it. Instead, use the more secure login() without
    parameters to prompt for token input.

    examples/cookbooks/Gemma2B_Instruction_Agent.ipynb [356]

    -login(token="Enter your token here")
    +login()  # Will prompt for token input when needed
    

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 6


    Why: The suggestion correctly identifies that using a placeholder token will cause authentication failures, but there's a minor discrepancy in the existing_code format that doesn't exactly match the actual code structure.

    Impact: Low

qodo-code-review[bot] • Jun 05 '25 13:06

    Codecov Report

    All modified and coverable lines are covered by tests ✅

    Project coverage is 16.43%. Comparing base (60fd485) to head (70057d9). Report is 77 commits behind head on main.

    Additional details and impacted files
    @@           Coverage Diff           @@
    ##             main     #608   +/-   ##
    =======================================
      Coverage   16.43%   16.43%           
    =======================================
      Files          24       24           
      Lines        2160     2160           
      Branches      302      302           
    =======================================
      Hits          355      355           
      Misses       1789     1789           
      Partials       16       16           
    
    Flag Coverage Δ
    quick-validation 0.00% <ø> (ø)
    unit-tests 16.43% <ø> (ø)

    Flags with carried forward coverage won't be shown.

    ☂️ View full report in Codecov by Sentry.

codecov[bot] • Jun 05 '25 14:06