IND - Associate Engineer, Data
The Hartford
Posted 1 hour ago • Via www.themuse.com
Job Description
IND Associate Engineer, Data - GCC061
We're determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals - and to help others accomplish theirs, too. Join our team as we help shape the future.
Key Responsibilities
- Implement AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions.
- Implement efficient Retrieval-Augmented Generation (RAG) architectures and integration with enterprise data infrastructure.
- Build and maintain scalable and robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark streaming, or similar.
- Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics.
- Develop AI-driven systems to improve data capabilities.
- Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
- Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
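For candidates new to Retrieval-Augmented Generation, the retrieval step mentioned above can be sketched in a few lines. This is a minimal illustration only: the bag-of-words "embedding" and in-memory document list are hypothetical stand-ins for the embedding models and vector databases (e.g., OpenSearch, Neptune) named later in this posting.

```python
# Minimal RAG retrieval sketch: embed the query, rank documents by
# similarity, and prepend the top matches as context for a language model.
# The embed() function here is a toy bag-of-words counter, NOT a real
# embedding model; it exists only to make the flow runnable.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": lowercase token counts. A real pipeline would call
    # an embedding model and store vectors in a vector database.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved passages become context that is prepended to the question
    # before it is sent to the language model (the "augmented" prompt).
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"


docs = [
    "Policy coverage includes flood and fire damage.",
    "Claims are filed through the online portal.",
    "The cafeteria opens at 8 a.m.",
]
print(build_prompt("How do I file claims?", docs))
```

In a production system the retrieval mechanism would query a vector or graph store rather than scanning a list, but the shape of the pipeline (embed, retrieve, augment the prompt) is the same.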
Required Skills & Experience:
- Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
- 2 years of data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, Big Data, cloud technologies (AWS/Google/Azure), Python/Spark, and Data Mesh, Data Lake, or Data Fabric architectures.
- Less than 2 years of experience will be considered for candidates with an advanced degree and applicable internship experience.
- 1+ years' experience with cloud platforms (AWS, GCP, or Azure).
- 1+ years of data engineering experience focused on supporting AI technologies.
- 1+ years implementing AI data solutions.
- 1+ years with prompt engineering techniques for large language models.
- 1+ years in implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models.
- 1+ years implementing AI-driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph).
- 1+ years of programming skills in Python
- 1+ years building AI pipelines that bring together structured, semi-structured, and unstructured data.
- 1+ years in vector databases, graph databases, NoSQL, and document DBs, including design, implementation, and optimization (e.g., AWS OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Neptune, MongoDB, DynamoDB).
- Strong written and verbal communication skills
- Able to communicate effectively with technical teams
- Team player who collaborates effectively across teams
- Strong organization and execution skills.
- Strong interpersonal and time management skills
- Ability to work successfully in a lean, agile, and fast-paced organization, leveraging Agile principles and ways of working.
- Ability to translate technical topics into business solutions and strategies
Salary & Compensation
Salary not disclosed.
Work Arrangement
Type: On-Site
Standard business hours at the office.