Introduction
The selected intern’s day-to-day responsibilities center on building and supporting an agentic AI platform through practical development work. The role includes designing AI agents in Python, creating multi-agent workflows with frameworks like LangGraph, and working with Large Language Models such as Gemini to generate and optimize PySpark code from natural language prompts. It also involves developing and testing backend APIs with FastAPI, while collaborating with engineers to build the MVP of Project Avaloka. Taken together, these responsibilities show a focus on planning, code generation, simulation, execution, automation, and backend service development.
Designing and Developing AI Agents in Python
A major part of the role is to design and develop AI agents using Python. These agents are intended for several connected tasks, including planning, code generation, simulation, and execution. That means the work is not limited to one stage of a workflow. Instead, it spans the full path from deciding what needs to happen to carrying out the action itself.
The emphasis on planning suggests that the AI agents are expected to help organize work before execution begins. Code generation points to producing code through agent behavior, while simulation and execution indicate that the agents also test or carry out actions in a controlled way. Python, the language most widely used for AI development work, ties these stages together, so the responsibilities combine logic, automation, and implementation in a single development flow.
Core focus areas in agent development
- Planning for what the agent should do
- Code generation through AI agent behavior
- Simulation of tasks or outcomes
- Execution of the intended actions
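The role description does not specify an implementation, but the plan, generate, simulate, execute loop listed above can be sketched in plain Python. Every class and method name here is illustrative, not taken from the role description, and a real agent would call an LLM where the stubs return canned values:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative agent that plans a task, generates code for it,
    simulates (dry-run checks) the result, and only then executes it."""
    history: list = field(default_factory=list)

    def plan(self, goal: str) -> list[str]:
        # Break the goal into ordered steps (an LLM call in practice).
        return [f"step: {goal}"]

    def generate(self, step: str) -> str:
        # Produce code for one step (again, an LLM call in practice).
        return f"print({step!r})"

    def simulate(self, code: str) -> bool:
        # Dry-run check before execution; here we only verify it compiles.
        try:
            compile(code, "<agent>", "exec")
            return True
        except SyntaxError:
            return False

    def execute(self, code: str) -> None:
        exec(code)  # sandboxing omitted for brevity
        self.history.append(code)

    def run(self, goal: str) -> None:
        for step in self.plan(goal):
            code = self.generate(step)
            if self.simulate(code):
                self.execute(code)

agent = Agent()
agent.run("load the sales table")
```

The point of the sketch is the ordering: nothing is executed until it has passed the simulation gate, which is what lets one agent loop cover all four focus areas.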
The structure of these responsibilities suggests a workflow-oriented approach. Rather than building isolated scripts, the work aims to create agents that can participate in a sequence of actions, which makes the Python-based agent development central to the broader platform and connects directly to the goal of an agentic system that supports more than one kind of task.
Because the responsibilities are described as day-to-day work, this is not a one-time assignment. It is an ongoing development effort that likely requires repeated refinement of the agents as the platform evolves. The agent-building work is therefore a central part of the intern’s contribution to the project.
Building Multi-Agent Workflows with LangGraph
Another important responsibility is to build multi-agent workflows using frameworks like LangGraph. These workflows are meant to automate data engineering pipelines, which places the work at the intersection of AI agents and data processing. The use of a multi-agent approach suggests that different agents may handle different parts of a workflow, rather than relying on a single agent to do everything.
LangGraph is specifically named as the framework for this purpose: tooling for creating workflows that coordinate multiple agents. That makes workflow design highly relevant to the overall platform, since automation is a key objective. The responsibility is not just to create agents, but to connect them in a way that supports structured pipeline automation.
What the workflow work involves
- Using LangGraph as a framework
- Creating multi-agent workflows
- Automating data engineering pipelines
- Supporting coordinated task handling across agents
This part of the role expands the scope of the work beyond individual agent behavior. It focuses on how agents interact and how their actions can be organized into a larger system. That is important for automation because a pipeline often requires multiple steps to happen in sequence or in coordination. The workflow design therefore helps turn separate AI capabilities into a usable engineering process.
The mention of data engineering pipelines also shows that the work is tied to practical technical operations. The intern is not only building AI components in isolation, but also helping shape how those components can support engineering tasks. This makes the workflow-building responsibility a bridge between AI logic and data-focused automation.
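LangGraph models a workflow as a graph of nodes that read and update a shared state, with edges defining the order of execution. A framework-free sketch of that same pattern, with all agent names invented for illustration, looks like this:

```python
# Minimal, framework-free sketch of the multi-agent pipeline pattern that
# LangGraph formalizes: each node is an agent that reads and updates a
# shared state dict, and the edge list defines the order. All names here
# are hypothetical, not from the role description.

def extract_agent(state: dict) -> dict:
    state["raw"] = [1, 2, 3, 2]                 # stand-in for reading a source
    return state

def transform_agent(state: dict) -> dict:
    state["clean"] = sorted(set(state["raw"]))  # dedupe and sort
    return state

def load_agent(state: dict) -> dict:
    state["loaded"] = len(state["clean"])       # stand-in for writing a sink
    return state

# The "graph": a linear edge list. LangGraph would additionally allow
# branching, looping, and conditional edges between agents.
PIPELINE = [extract_agent, transform_agent, load_agent]

def run_pipeline(state: dict) -> dict:
    for node in PIPELINE:
        state = node(state)
    return state

result = run_pipeline({})
```

Splitting the pipeline into extract, transform, and load agents mirrors the idea that different agents handle different parts of a workflow rather than one agent doing everything.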
Working with LLMs to Generate and Optimize PySpark Code
The role also includes working with Large Language Models, specifically Gemini, to generate and optimize PySpark code from natural language prompts. This responsibility highlights the use of AI to translate human instructions into code. It also shows that the intern’s work involves not only generation, but also optimization, which means improving the code produced through the model.
The phrase “from natural language prompts” is important because it indicates that the input to the system comes from plain-language requests. The model then helps produce PySpark code based on that input. This makes the task highly relevant to automation and accessibility, since it connects human language with technical implementation. The optimization part adds another layer, showing that the output is not simply accepted as-is, but refined for better use.
Key elements of the LLM-based coding work
- Using Gemini as the LLM
- Generating PySpark code
- Optimizing code after generation
- Working from natural language prompts
This responsibility is especially significant because it combines AI assistance with data engineering code. PySpark is directly named, so the work is focused on a specific technical output rather than a general coding task. The intern is expected to help shape how the model responds to prompts and how the resulting code is improved. That makes the role both practical and iterative.
The use of LLMs in this way also fits the broader theme of the position: building agentic systems that can understand, generate, and execute work. In this case, the model serves as a tool for code creation and refinement. The responsibility therefore supports both the AI side of the platform and the engineering side of the workflow.
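The generate-then-optimize flow can be sketched as follows. The `fake_llm` stub stands in for a real Gemini API call, and the "optimization" shown is a single illustrative rewrite (replacing a full `collect()` with `count()`), not a production optimizer:

```python
# Sketch of the prompt -> generate -> optimize loop. fake_llm is a stub
# standing in for a Gemini call; in the real system it would return
# model-generated PySpark for the given prompt.

def fake_llm(prompt: str) -> str:
    # Canned "model output": correct but inefficient PySpark.
    return 'df = spark.read.parquet("sales")\nrows = len(df.collect())'

def optimize(pyspark_code: str) -> str:
    # Avoid materializing the whole DataFrame on the driver just to
    # count rows; df.count() runs distributed on the cluster.
    return pyspark_code.replace("len(df.collect())", "df.count()")

def prompt_to_pyspark(request: str) -> str:
    draft = fake_llm(f"Write PySpark for: {request}")
    return optimize(draft)

code = prompt_to_pyspark("count the rows in the sales table")
```

The two-step shape matters more than the stub: the model's draft is treated as input to a refinement pass, which matches the description of output that is "not simply accepted as-is, but refined for better use."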
Developing and Testing Backend APIs with FastAPI
In addition to AI agent and workflow development, the role includes developing and testing backend APIs using FastAPI for AI services. This means the intern is also involved in the service layer that supports the AI system. Backend APIs are essential for connecting components and making services accessible, and FastAPI is the framework named for this work.
The inclusion of testing shows that the responsibility is not limited to writing API code. It also includes checking that the APIs work as intended. This is important in a project that combines agents, workflows, and LLM-based code generation, because the backend needs to support those functions reliably. The API work therefore helps make the AI services usable within the broader platform.
Backend API responsibilities
- Developing backend APIs
- Testing backend APIs
- Using FastAPI
- Supporting AI services
This part of the role shows that the intern’s work is not purely experimental or model-focused. It also includes practical backend engineering. By building APIs for AI services, the intern helps create the interface through which the system can operate. That makes the backend work a necessary part of turning AI capabilities into a functioning platform.
The combination of development and testing suggests a careful approach to implementation. APIs need to be built and then verified, especially when they support services tied to agent behavior and code generation. In that sense, the FastAPI responsibility complements the rest of the role by providing the technical foundation for delivery.
Collaborating on the MVP of Project Avaloka
The final major responsibility is to collaborate with engineers to build the MVP of Project Avaloka, described as an agentic AI platform. This makes collaboration a key part of the role, not just individual development. The intern is expected to work alongside engineers while contributing to the creation of the MVP, which is the initial version of the platform.
Project Avaloka is identified as an agentic AI platform, which connects directly to the other responsibilities in the role. The work on AI agents, multi-agent workflows, LLM-based code generation, and backend APIs all supports this broader project. The MVP focus suggests that the work is aimed at bringing the platform into a usable early form. That makes the intern’s contributions part of a foundational build effort.
How the collaboration fits the role
- Working with engineers
- Supporting the MVP build
- Contributing to Project Avaloka
- Helping shape an agentic AI platform
Collaboration is especially important in a project that combines multiple technical layers. The intern is not working on a single isolated feature, but on a platform that includes agents, workflows, code generation, and APIs. Working with engineers helps align these pieces into one product direction. The MVP context also suggests a focus on building the core version first, rather than a fully expanded system.
This chapter brings together the earlier responsibilities into one project goal. The AI agents, LangGraph workflows, Gemini-based PySpark generation, and FastAPI backend all appear to support the same platform effort. Project Avaloka is therefore the central destination for the work described in the responsibilities.
Frequently Asked Questions
What are the intern’s main day-to-day responsibilities?
The intern’s day-to-day responsibilities include designing and developing AI agents using Python, building multi-agent workflows with frameworks like LangGraph, working with LLMs such as Gemini to generate and optimize PySpark code, developing and testing backend APIs using FastAPI, and collaborating with engineers to build the MVP of Project Avaloka.
What tasks are the AI agents expected to handle?
The AI agents are expected to support planning, code generation, simulation, and execution tasks. These responsibilities show that the agents are meant to take part in multiple stages of a workflow, from deciding what should happen to carrying out the action itself.
What is the purpose of using LangGraph in this role?
LangGraph is mentioned as a framework used to build multi-agent workflows. In this role, those workflows are designed to automate data engineering pipelines. The framework helps connect multiple agents so they can work together in a structured way.
How is Gemini used in the work?
Gemini is used as a Large Language Model to generate and optimize PySpark code from natural language prompts. This means the model helps turn plain-language input into code, and the output is then improved through optimization.
What backend work is included in the role?
The role includes developing and testing backend APIs using FastAPI for AI services. This means the intern works on the service layer that supports the AI system and checks that the APIs function as intended.
What is Project Avaloka?
Project Avaloka is described as an agentic AI platform. The intern collaborates with engineers to build its MVP, and the other responsibilities in the role support that platform-building effort.
Conclusion
This role brings together several connected areas of AI and backend development. The intern works on Python-based AI agents, multi-agent workflows with LangGraph, Gemini-based PySpark code generation and optimization, and FastAPI backend APIs. All of these responsibilities support collaboration with engineers on the MVP of Project Avaloka, an agentic AI platform. The overall picture is one of practical, hands-on work across planning, automation, code generation, simulation, execution, and service development. Together, these tasks define a role focused on building a functional AI system from multiple technical parts.