Application Form

SyllogisTeks is Your Source for
Staffing in St. Louis and Beyond

Let us help you find the right position for your skillset! Our areas of expertise include business analysis, infrastructure management, information security, development, systems administration and more. Take a minute to search our current job opportunities or contact us directly for more personalized support.

If you're in search of talent for your organization, please click here.

Featured Job Opportunities

To be considered for future job opportunities, apply here.

IT Cloud Engineer

gRPC, Kafka, Go, TDD, REST API

Remote Full-Time (Out-of-town candidates welcome)

Job ID: 21187

Job Type: Contract

$51.87 - $103.73

*This salary range is merely an estimate and may vary based on an applicant’s location, market conditions, skills, prior relevant experience, certain degrees and certifications, and other relevant factors.

The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.

To make this possible, Bayer collects terabytes of data across all aspects of its operations, from genome sequencing, crop field trials, manufacturing, supply chain, financial transactions and everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and quality of all crop system development decisions to unbelievable levels.

What you will do is why you should join us:
• Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
• Take pride in software craftsmanship, apply a deep knowledge of algorithms and data structures to continuously improve and innovate
• Work with other top-level talent solving a wide range of complex and unique challenges that have real world impact
• Mentor and guide other engineers on areas of expertise
• Explore relevant technology stacks to find the best fit for each dataset
• Lead technical initiatives, communicating your technical vision and strategy to the broader organization
• Contribute your talent to relevant projects; strength of ideas trumps position on an org chart
• Pursue opportunities to present our work at relevant technical conferences
o Google Cloud Next 2019: https://www.youtube.com/watch?v=fqvuyOID6v4
o Google Cloud Next 2024: https://www.youtube.com/watch?v=iafduXqwfMs
o GraphConnect 2015: https://www.youtube.com/watch?v=6KEvLURBenM
o Google Cloud Blog:
  ▪ https://cloud.google.com/blog/products/containers-kubernetes/google-kubernetes-engine-clusters-can-have-up-to-15000-nodes
  ▪ https://cloud.google.com/blog/products/databases/bayer-uses-alloydb
o Cloud Wars: https://cloudwars.com/cloud/how-google-clouds-alloydb-empowers-bayer-crop-science-to-overcome-data-challenges-cloud-wars-live/
o FOSS4G: https://talks.osgeo.org/foss4g-europe-2025/talk/J8PKC7

If you share our values, you should have:
• A track record of shipping and maintaining multiple major product releases
• Proven experience developing and launching a software product or significant feature written in Go
• Proven experience building and maintaining data-intensive APIs using a RESTful approach
• Experience with stream processing using Apache Kafka
• Comfort with unit testing and test-driven development (TDD) methodologies
• Familiarity with creating and maintaining containerized application deployments with a platform like Docker
• Familiarity with deploying to and working with Kubernetes cluster infrastructure
• A proven ability to build and maintain cloud-based infrastructure on a major cloud provider such as AWS, Azure, or Google Cloud Platform
• Experience with data modeling for large-scale databases, either relational or NoSQL
• Proficiency in verbal and written English, with the ability to connect with diverse individuals, listen actively to their needs, and support meaningful analysis for better decision-making

Bonus points for:
• Experience with protocol buffers and gRPC
• Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine
• Experience working with scientific datasets, or a background in the application of quantitative science to business problems
• Bioinformatics experience, especially large scale storage and data mining of variant data, variant annotation, and genotype to phenotype correlation

Sr Full Stack Node.js Developer

React, JavaScript, PostgreSQL, Node.js, AWS

Chesterfield, MO — Hybrid (Blend of Onsite/Work from Home)

Sr. Project Manager

Project Manager, Mechanical Engineering

Bentonville, AR — Job Onsite

Design Engineer

HVAC, Electrical Engineering, CADD

Valley Park, MO — Job Onsite

SCADA and Controls Engineer

Ignition by Inductive Automation implementation, PLC, SCADA

Northridge, CA — Job Onsite

CAD Technician (Expert)

Leica Point Cloud, AutoCAD, Manufacturing

Muscatine, IA — Hybrid (Blend of Onsite/Work from Home)

IT Data Engineer (Mid)

NoSQL, Feenix AI, AWS, Single/multi-page Web-based UI, Rest API

Chesterfield, MO — Hybrid (Blend of Onsite/Work from Home)

Project Instrumentation Engineer

Bachelor's Degree, Instrument Engineering, MS Office

Luling, LA — Job Onsite

Manufacturing Engineer

Manufacturing Techniques, Metal Fabrication

St. Louis, MO — Job Onsite

Project Manager

CAD, Manufacturing

St. Louis, MO — Job Onsite

Reliability Engineer

Mechanical Troubleshooting, MS Office

Whitestown, IN — Job Onsite

Project Manager - Data Center

Construction Project Management, HVAC

St. Louis, MO — Job Onsite

Robotics Software Engineer (Python)

Python, SQL Query Development, Bachelor's Degree, JSON API

Ankeny, IA — Job Onsite

Virtual Mechanical & Plumbing Construction Manager

Mechanical/Plumbing Virtual Construction, Revit

St. Louis, MO — Job Onsite

Sr. Data Center Network Engineer

BGP, Network engineer, Data Center

Springfield, IL — Hybrid (Blend of Onsite/Work from Home)

Construction Project Manager

CAD, Engineering, Project Manager, Construction

Bentonville, AR — Job Onsite

Senior MEP Estimator

HVAC, Trimble RTS, Construction Plumbing

St. Louis, MO — Job Onsite

Director of Mechanical & Plumbing Virtual Construction

Virtual Construction, Mechanical/Plumbing Construction, Revit

St. Louis, MO — Job Onsite
