Duration: 7 Months
Location: Austin, TX (onsite in a hybrid model)
Job Description:
The client is seeking a GenAI/LLM Trust & Safety Consultant who can quickly ramp up and contribute to the client's Responsible AI initiatives by designing and leading strategy and testing efforts to identify and mitigate AI risks across the client's products. The ideal candidate will help test AI products, analyze risk areas, and summarize findings for cross-functional partners, ensuring the client's AI systems align with responsible-use principles.
You will be part of a highly motivated, collaborative team, working with groups across the company to analyze how products impact users and broader society, and leveraging data to provide solutions.
Overall Responsibilities:
Conduct risk testing and product assessments across AI and generative AI features prior to launch.
Identify and analyze potential risks, especially in sensitive content domains.
Summarize and report findings in clear, structured documentation for product stakeholders.
Partner with cross-functional teams (including Product, Legal, and Policy) to communicate risk implications and recommend mitigations.
Support Responsible AI strategy development through testing insights, documentation, and policy alignment.
Adapt quickly to changing team needs and priorities in a fast-evolving AI landscape.
Participate in team discussions and ad hoc projects that strengthen risk governance and product responsibility.
Required Experience (Mandatory):
4 years of experience in data analytics, Trust & Safety, policy, cybersecurity, or related fields.
Familiarity with, or exposure to, Large Language Models (LLMs) or generative AI concepts.
Demonstrated ability to analyze complex information and identify potential risks in technical or policy contexts.
Strong problem-solving, critical thinking, and attention to detail.
Excellent written and verbal communication skills with the ability to summarize complex issues clearly.
Comfortable working independently in an ambiguous and fast-changing environment.
Desired Experience (Nice to have):
Prior experience working in a technology company's Trust & Safety or Policy organization.
General understanding of risk frameworks or responsible AI principles.
Education:
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit our website.
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.