At [BEBR], we work on a wide range of projects, from analysis and predictive modelling in health care to data visualization and forecasting for satellite-based Earth monitoring systems. We believe that people thrive when they work on projects they are passionate about, so together we can find a project within our portfolio that best suits your interests and expertise. This reflects the vision of [BEBR]: focus on projects that we enjoy and stand behind as a team.
As a Data Scientist, you will join our Data Science team and be responsible for data analysis and for helping to build predictive models. We work mostly in Python, so demonstrable knowledge of Python and SQL (ideally PostgreSQL) is a must. It is also important to have knowledge of statistical modelling, such as multivariate analysis (regression, principal component analysis, etc.), and to be familiar with at least one ML framework (e.g. Scikit-Learn or TensorFlow). In addition, we have a few more knock-out criteria:
- Experienced with machine learning (random forest, reinforcement learning, association rule learning, deep learning, NLP, clustering, etc.): either deep experience with one or two techniques, or general experience with many
- Experienced with data cleaning: imputation, scaling, normalization, one-hot encoding, outlier flagging, feature engineering
- Experienced with hyper-parameter tuning, regularization, cross-validation, and other model-tuning techniques
- Willing to take ownership of and responsibility for a wide range of data science tasks, from helping to source and ingest data, to training models, to working with MLOps to deploy models and integrate them with wider systems
- Comfortable with cross-department work: interacting and collaborating closely with the sales, development, and MLOps teams, as well as presenting ideas and findings to clients
- Comfortable leading projects and working independently, with a proactive approach to tasks
- Able to communicate and explain problems and solutions clearly
- Willing to learn and adapt
- Used to standard development practices and toolkits for collaborative coding (Agile, Git, Bitbucket, Jira, etc.)
- Experience working within the Scrum framework
- Experience with data visualization tools such as matplotlib, Plotly, Bokeh, and Dash
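To give a flavour of the day-to-day work behind these criteria, here is a minimal sketch combining two of them: one-hot encoding and scaling in a preprocessing pipeline, tuned with cross-validated grid search in scikit-learn. The dataset, column names, and parameter grid are purely illustrative, not taken from any [BEBR] project.

```python
# Illustrative sketch: preprocessing + cross-validated hyper-parameter
# tuning with scikit-learn. Toy data; columns are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data: one numeric and one categorical feature.
X = pd.DataFrame({
    "age": [25, 32, 47, 51, 23, 38, 44, 29],
    "city": ["AMS", "RTM", "AMS", "UTR", "RTM", "AMS", "UTR", "RTM"],
})
y = [0, 0, 1, 1, 0, 1, 1, 0]

# Scale the numeric column, one-hot encode the categorical one.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(random_state=0)),
])

# Cross-validated search over a tiny, illustrative parameter grid.
search = GridSearchCV(
    model,
    param_grid={"clf__n_estimators": [10, 50], "clf__max_depth": [2, None]},
    cv=2,
)
search.fit(X, y)
print(search.best_params_)
```

In practice the same pipeline pattern scales to real feature sets: preprocessing lives inside the estimator, so the cross-validation in `GridSearchCV` never leaks statistics from the validation folds into the fit.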
Made it past the ‘knockout stage’? Great, now please read more :)
To maximise our efficiency and deliver meaningful software and analysis in an agile way, we work according to the Scrum framework, meaning that you will participate in sprints to deliver a release-worthy product every two weeks. During these sprints, you will work together with our other Data Scientists, as well as our Java Developers and DevOps engineers. The team welcomes criticism and open debate, since we believe this is the best way to learn and grow as a team.
Additional knowledge and competences (nice-to-have):
- Understanding of software design principles
- Familiarity with ML pipeline frameworks
- Experience with Docker
- Extensive knowledge of artificial intelligence (deep learning, RNNs, CNNs, etc.)
- Knowledge of image processing, specifically the processing of satellite data
- Knowledge of event processing and prediction
- Experience deploying and training models on in-house Kubernetes clusters (MLflow, TensorFlow Serving, Kubeflow)
- Familiarity with any at-scale ML technologies, such as Spark
- Experience setting up and using cloud ML services (Azure, AWS, etc.)
- Expertise in a scientific discipline such as chemistry, physics, biology, medicine
Salary and benefits
- Salary is dependent on your expertise and experience
- Free lunches and drinks at the office
- Unlimited cuddles from [BEBR]’s office dog (Lui)
Do you think you are the right fit for our awesome team? Don’t think twice and send an email with your CV and a (short) motivation to firstname.lastname@example.org
Even if you don't tick all of these boxes but still think you could make a great contribution to our team, let us know why.