
Build a Crew to Tailor your Resume to Job Descriptions


Note: If you want to modify your resume for a specific job application, visit the link: Resume Creator. This tool implements an extended version of the CrewAI logic explained here and helps you build market-ready resumes in minutes.

Introduction

Generative AI has moved beyond simple chat interfaces. We now orchestrate multiple AI "agents" to work together, solving complex workflows that a single model struggles to handle. This guide explores CrewAI, a framework for orchestrating role-playing autonomous AI agents.

We will deconstruct a real-world example: building a crew to tailor a resume for a specific job posting. By the end, you will understand how to build your own multi-agent system.

import os
os.environ["OPENAI_API_KEY"] = 'sk-proj-....'
os.environ["OPENAI_MODEL_NAME"] = 'gpt-3.5-turbo'
os.environ["SERPER_API_KEY"] = '..' # get here https://serper.dev/api-keys

Core Concepts: The Building Blocks

CrewAI relies on three fundamental pillars: Agents, Tasks, and Tools, orchestrated by a Crew.

from crewai import Agent, Task, Crew

Tools

Tools are the skills an agent uses to interact with the world. Without tools, an agent is just a text generator. In this workflow, our agents use:
1. ScrapeWebsiteTool: To read job descriptions.
2. SerperDevTool: To search the internet for company details.
3. FileReadTool: To read the candidate's existing resume.
4. MDXSearchTool: To perform semantic searches on markdown files.
5. ExperienceCalculatorTool: Calculates years of experience from joining dates.

Three Key Elements of Tools

Tools in CrewAI are designed for robustness:
1. Versatility: Tools can accept various input types (strings, dictionaries) and adapt to different agent needs.
2. Fault Tolerance: If a tool fails (e.g., a scraper gets a 404 error), the agent can retry or handle the error gracefully rather than crashing the workflow.
3. Caching: Results from tools are cached. If an agent searches for "Amazon AI jobs" twice, the second result is fetched from cache, saving API costs and time.
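The caching behaviour can be pictured with a plain-Python sketch. This uses `functools.lru_cache` as a stand-in for CrewAI's internal cache layer, purely to illustrate the semantics: the second identical call never reaches the underlying API.

```python
import functools

CALLS = 0  # counts how many times the underlying "search" actually runs

@functools.lru_cache(maxsize=None)
def search(query: str) -> str:
    """Stand-in for an expensive tool call (e.g. a web search API)."""
    global CALLS
    CALLS += 1
    return f"results for {query!r}"

search("Amazon AI jobs")   # first call: hits the (pretend) API
search("Amazon AI jobs")   # second call: served from cache
print(CALLS)               # → 1
```

The same idea applies to agent tool calls: repeated identical inputs are answered from cache, saving both latency and API spend.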

# Importing built-in tools
from crewai_tools import (
  FileReadTool,
  ScrapeWebsiteTool,
  MDXSearchTool,
  SerperDevTool
)
scrape_tool = ScrapeWebsiteTool()
search_tool = SerperDevTool()
read_resume = FileReadTool(file_path='harsharesume.md')
semantic_search_resume = MDXSearchTool(mdx='harsharesume.md')

Creating a new tool

While CrewAI comes packed with powerful built-in tools like FileReadTool, ScrapeWebsiteTool, MDXSearchTool, and SerperDevTool, real-world scenarios often demand specific, custom logic.
The Experience Calculator tool will precisely calculate a candidate's total years of experience based on their start and end dates.
To create a custom tool in CrewAI, we simply extend the BaseTool class. We define a unique name and description for the tool, and then implement our logic within the _run function. This allows our agents to execute Python code directly to solve problems that standard tools cannot address.

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from datetime import datetime

# Define the input schema for the tool
class ExperienceInput(BaseModel):
    start_date: str = Field(description="Start date of the job in YYYY-MM-DD format")
    end_date: str = Field(description="End date of the job in YYYY-MM-DD format. Use 'Present' if currently employed.")

# Define the custom tool class
class ExperienceCalculatorTool(BaseTool):
    name: str = "Experience Calculator"
    description: str = "Calculates the total years of experience given a start and end date."
    args_schema: type[BaseModel] = ExperienceInput

    def _run(self, start_date: str, end_date: str) -> str:
        try:
            start = datetime.strptime(start_date, "%Y-%m-%d")
            if end_date.lower() == 'present':
                end = datetime.now()
            else:
                end = datetime.strptime(end_date, "%Y-%m-%d")

            years = (end - start).days / 365.25
            return f"{years:.1f} years"
        except ValueError:
            return "Error: Invalid date format. Please use YYYY-MM-DD."

# Initialize the custom tool
calc_experience = ExperienceCalculatorTool()
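The date arithmetic inside `_run` can be sanity-checked in isolation, without the framework. The helper below repeats the same calculation using only the standard library (the dates are made up for illustration):

```python
from datetime import datetime

def years_between(start_date: str, end_date: str) -> str:
    """Same calculation as ExperienceCalculatorTool._run, minus the framework."""
    start = datetime.strptime(start_date, "%Y-%m-%d")
    if end_date.lower() == "present":
        end = datetime.now()
    else:
        end = datetime.strptime(end_date, "%Y-%m-%d")
    return f"{(end - start).days / 365.25:.1f} years"

print(years_between("2016-01-01", "2024-01-01"))  # → 8.0 years
```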

Agents

An agent is an autonomous unit programmed to perform tasks, make decisions, and use tools. Think of it as a team member with a specific job title.
In our resume example, we define four distinct agents:
1. Tech Job Researcher: Analyses job postings to extract requirements.
2. Personal Profiler: Compiles comprehensive profiles from GitHub and personal write-ups.
3. Resume Strategist: Aligns the resume with the job requirements.
4. Interview Preparer: Creates talking points based on the tailored resume.

The Six Key Elements of High-Performance Agents

To ensure agents perform better than standard LLM calls, CrewAI emphasises six elements:
1. Role Playing: We assign specific roles (e.g., "Resume Strategist") rather than asking a generic assistant. This narrows the model's focus.
2. Focus: By defining a goal and backstory, agents stay in character. The Profiler's goal is specifically to "do incredible research on job applicants", keeping it from drifting into irrelevant data.
3. Tools: Agents are given only the tools relevant to their role, which both empowers and constrains them, keeping their responses grounded.
4. Cooperation: Agents share context through the tasks they perform. The Resume Strategist task explicitly waits for the outputs (via the `context` parameter) of the Researcher and Profiler tasks before starting, ensuring it has all the necessary data.
5. Guardrails: (Implicit in framework) Prevents agents from hallucinating by enforcing tool usage and structured inputs.
6. Memory: Agents retain short-term memory of the current execution and long-term memory of past interactions to avoid repetition.

# Agent 1: Researcher
researcher = Agent(
    role="Tech Job Researcher",
    goal="Make sure to do amazing analysis on job posting to help job applicants",
    tools = [scrape_tool, search_tool],
    verbose=True,
    backstory=(
        "As a Job Researcher, your prowess in navigating and extracting critical information from job postings is unmatched. "
        "Your skills help pinpoint the necessary qualifications and skills sought by employers, forming the foundation for effective application tailoring."
    )
)
# Agent 2: Profiler
profiler = Agent(
    role="Personal Profiler for Engineers",
    goal="Do incredible research on job applicants to help them stand out in the job market",
    tools = [scrape_tool, search_tool, read_resume, semantic_search_resume, calc_experience],
    verbose=True,
    backstory=(
        "Equipped with analytical prowess, you dissect and synthesise information from diverse sources to craft comprehensive personal and professional profiles, laying the groundwork for personalized resume enhancements."
    )
)

Delegation is the ability of an agent to hand off a task or ask a question of another agent to complete its own assignment. The Resume Strategist might find that the Personal Profiler missed a specific detail about a candidate. Instead of failing or hallucinating, the Strategist can "delegate" a sub-task back to the Profiler to fetch that specific data. This is enabled by allow_delegation.

# Agent 3: Resume Strategist
resume_strategist = Agent(
    role="Resume Strategist for Engineers",
    goal="Find all the best ways to make a resume stand out in the job market.",
    tools = [scrape_tool, search_tool, read_resume, semantic_search_resume],
    verbose=True,
    backstory=(
        "With a strategic mind and an eye for detail, you excel at refining resumes to highlight the most relevant skills and experiences, ensuring they resonate perfectly with the job's requirements."
    ),
    allow_delegation=True # Enables the agent to talk to others
)
# Agent 4: Interview Preparer
interview_preparer = Agent(
    role="Engineering Interview Preparer",
    goal="Create interview questions and talking points based on the resume and job requirements",
    tools = [scrape_tool, search_tool,
             read_resume, semantic_search_resume],
    verbose=True,
    backstory=(
        "Your role is crucial in anticipating the dynamics of interviews. "
        "With your ability to formulate key questions and talking points, you prepare candidates for success, ensuring they can confidently address all aspects of the job they are applying for."
    )
)

Tasks

A task is a specific assignment given to an agent. It must contain three critical components:
1. Description: A clear instruction of what needs to be done.
2. Expected Output: A definition of what the final result should look like.
3. Agent: The specific team member assigned to this task.

For example, the Research Task asks the agent to "Analyze the job posting URL... to extract key skills" and expects "A structured list of job requirements".

# Task for Researcher Agent: Extract Job Requirements
research_task = Task(
    description=(
        "Analyse the job posting URL provided ({job_posting_url}) to extract key skills, experiences, and qualifications required. "
        "Use the tools to gather content and identify and categorise the requirements."
    ),
    expected_output=(
        "A structured list of job requirements, including necessary "
        "skills, qualifications, and experiences."
    ),
    agent=researcher,
    async_execution=True
)

Async Execution: Tasks can run in parallel. In our example, the Research Task and Profile Task both have async_execution=True. This means the system researches the job and the candidate simultaneously, cutting total execution time.
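The pay-off is easy to see with a plain `concurrent.futures` analogy. The two functions below are stand-ins for the research and profile tasks, and the sleep durations are arbitrary; the point is that two independent 0.2-second jobs finish in roughly 0.2 seconds when run in parallel, not 0.4:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def research_job():       # stand-in for research_task
    time.sleep(0.2)
    return "job requirements"

def profile_candidate():  # stand-in for profile_task
    time.sleep(0.2)
    return "candidate profile"

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    reqs = pool.submit(research_job)       # both submitted immediately...
    prof = pool.submit(profile_candidate)
    results = (reqs.result(), prof.result())  # ...and awaited together
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # both finish in ~0.2s rather than ~0.4s
```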

# Task for Profiler Agent: Compile Comprehensive Profile
profile_task = Task(
    description=(
        "Compile a detailed personal and professional profile using the GitHub ({github_url}) URLs, and personal write-up ({personal_writeup}). "
        "Utilise tools to extract and synthesise information from these sources."
    ),
    expected_output=(
        "A comprehensive profile document that includes skills, project experiences, contributions, interests, and communication style."
    ),
    agent=profiler,
    async_execution=True
)

Output File: We can save agent outputs directly to local files, such as tailored_resume.md, for easy access.
Human Feedback: We can configure agents to ask for human approval before finalising a task, ensuring quality control on critical steps.
Context: The context parameter is used in tasks to pass data between agents:
- We can pass a list of tasks as context to a task.
- The task then takes into account the output of those tasks in its execution.
- The task will not run until it has the output(s) from those tasks.
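Conceptually, `context` turns the task list into a small dependency graph: a task runs only once every task it lists has produced its output. A minimal sketch of that rule, using a hypothetical `run_pipeline` helper rather than CrewAI's actual scheduler:

```python
def run_pipeline(tasks):
    """Run tasks in dependency order.

    `tasks` maps a task name to (context task names, function taking those
    tasks' outputs). Illustrative structure only, not CrewAI's internals.
    """
    outputs = {}
    pending = dict(tasks)
    while pending:
        for name, (context, fn) in list(pending.items()):
            if all(dep in outputs for dep in context):  # all context outputs ready?
                outputs[name] = fn(*(outputs[d] for d in context))
                del pending[name]
    return outputs

outputs = run_pipeline({
    "research": ([], lambda: "requirements"),
    "profile":  ([], lambda: "profile"),
    "resume":   (["research", "profile"],
                 lambda r, p: f"resume from {r} + {p}"),
})
print(outputs["resume"])  # → resume from requirements + profile
```

The "resume" task cannot run until both of its context tasks have finished, which is exactly the guarantee `context=[research_task, profile_task]` provides below.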

# Task for Resume Strategist Agent: Align Resume with Job Requirements
resume_strategy_task = Task(
    description=(
        "Using the profile and job requirements obtained from previous tasks, tailor the resume to highlight the most relevant areas. "
        "Employ tools to adjust and enhance the resume content. "
        "Make sure this is the best resume ever, but don't make up any information. "
        "Update every section, including the initial summary, work experience, skills, and education, "
        "all to better reflect the candidate's abilities and how they match the job posting."
    ),
    expected_output=(
        "An updated resume that effectively highlights the candidate's qualifications and experiences relevant to the job."
    ),
    output_file="tailored_resume.md",
    context=[research_task, profile_task],
    agent=resume_strategist,
    human_input=False
)
# Task for Interview Preparer Agent: Develop Interview Materials
interview_preparation_task = Task(
    description=(
        "Create a set of potential interview questions and talking points based on the tailored resume and job requirements. "
        "Utilise tools to generate relevant questions and discussion points. "
        "Make sure to use these questions and talking points to help the candidate highlight the main points of the resume and how it matches the job posting."
    ),
    expected_output=(
        "A document containing key questions and talking points that the candidate should prepare for the initial interview."
    ),
    output_file="interview_materials.md",
    context=[research_task, profile_task, resume_strategy_task],
    agent=interview_preparer
)

Crew

The Crew is the manager. It brings agents and tasks together and dictates the process flow (sequential or hierarchical) to achieve the final outcome.

job_application_crew = Crew(
    agents=[researcher,
            profiler,
            resume_strategist,
            interview_preparer],

    tasks=[research_task,
           profile_task,
           resume_strategy_task,
           interview_preparation_task],

    verbose=True
)
job_application_inputs = {
    'job_posting_url': 'https://amazon.jobs/en/jobs/3102794/sr-ai-ml-consultant-tech-industry-tech-industry',
    'github_url': 'https://github.com/HarshaAsh',
    'personal_writeup': """Harsha Achyuthuni is a seasoned data science professional with over 8 years of experience, currently serving as a Manager at Deloitte. He likes solving complex business problems by integrating data, statistics, technology and business understanding. With a Master's in Business Analytics from Imperial College London and executive education from IIM Bangalore, Harsha has consistently delivered impactful AI and ML solutions across diverse industries, including aerospace, FMCG, pharmaceuticals, insurance, retail, agriculture and manufacturing. His expertise spans predictive maintenance, demand forecasting, optimisation engines, supply chain analytics and cutting-edge generative AI applications."""
}

### this execution will take a few minutes to run
result = job_application_crew.kickoff(inputs=job_application_inputs)

from IPython.display import Markdown, display
display(Markdown(filename="tailored_resume.md"))

display(Markdown(filename="./interview_materials.md"))

CrewAI transforms static AI interactions into a dynamic, collaborative workforce. By defining specific roles, providing the right tools, and managing context through tasks, you can automate complex professional workflows with high precision.

References

  1. DeepLearning course on CrewAI