Collected thoughts on implementing Kafka data pipelines

Below are notes and thoughts collected during my recent work with Kafka, building data streaming pipelines between data warehouses and data lakes. Maybe someone will benefit. The rationale: some points on picking (or not picking) Kafka as the solution. Kafka originated at LinkedIn, which remains a major user, and is now an open-source Apache project. Kafka is good as glue between components. …
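
As a taste of that glue role, here is a minimal sketch, assuming the kafka-python client (pip install kafka-python) and a broker at localhost:9092; the topic name "warehouse-events" and the message layout are hypothetical, not taken from the project described above.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    # Producer side: push change events from the warehouse onto a topic.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("warehouse-events", {"table": "orders", "op": "insert", "id": 42})
    producer.flush()  # block until the message is actually delivered

    # Consumer side: read the same topic, e.g. to load events into the data lake.
    consumer = KafkaConsumer(
        "warehouse-events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",    # start from the beginning of the topic
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        consumer_timeout_ms=5000,        # stop iterating once the topic is drained
    )
    for message in consumer:
        print(message.value)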

Introducing my Big Data orientation workshops

Data Science allows us to create models that analyze data faster and more accurately than humans. If you’re a Python programmer, you’re likely to use libraries such as TensorFlow, Keras, Scikit-learn, or Pandas to create those models. To turn those models into production systems, we need a bit of Data Engineering knowledge. We need to define the underlying data architecture, perhaps considering as components: Apache …
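
For illustration, here is a minimal model-building sketch with scikit-learn; the dataset and hyperparameters are placeholders, not part of the workshop material.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Load a toy dataset and hold out a quarter of it for testing.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Fit a simple classifier and report its accuracy on the held-out data.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Turning even this small script into production is where the Data Engineering part begins: the model has to be stored, served, and fed with fresh data.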

Git version control: part 2

With the help of this article, you made your first steps with Git, the version control software. You learned to commit your software so that it became version-controlled. You need just two more skills: working with remote repositories, and checking out a particular version of your files (not necessarily the newest one). Learn those two things, and you’re good to go. This text, by …
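
For quick reference, here are the two skills in command form; the repository URL and the commit hash below are placeholders, not taken from the article.

    # Working with a remote repository:
    git clone https://example.com/user/project.git   # copy the remote repository locally
    git fetch origin                                 # download new commits without merging them
    git pull origin master                           # fetch and merge the remote branch

    # Checking out a particular version:
    git log --oneline                                # list commits with their short hashes
    git checkout a1b2c3d                             # move the working tree to that commit
    git checkout master                              # return to the newest version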

Data Engineering + Data Science: building the full stack

This article is part of the Big Data in 30 hours course material, meant as a reference for the students. In our class we have looked at a number of Data Engineering and Data Science technologies. You may be wondering how they play together. It is now time to build an end-to-end workflow resembling production environments. An example data streaming architecture: in Data Science, we primarily …
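
One possible shape of such an end-to-end workflow, sketched under assumptions: events arrive on a Kafka topic and are scored by a model trained offline with scikit-learn. The broker address, topic name, model file, and message layout are all hypothetical.

    import json
    import joblib
    from kafka import KafkaConsumer

    # A model trained and serialized offline (the Data Science side).
    model = joblib.load("model.pkl")

    # A stream of feature vectors arriving on Kafka (the Data Engineering side).
    consumer = KafkaConsumer(
        "features",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        features = message.value["features"]       # assumed message layout
        prediction = model.predict([features])[0]  # score one event at a time
        print(message.value.get("id"), "->", prediction)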

Git version control: concise introduction

This article is part of the Big Data in 30 Hours lecture series and is intended to serve as reference material for students. However, I hope others can also benefit. Why do we need version control in Data Science? Working with data is similar to working with software. A Data Scientist developing source code and data for the models needs similar basic tooling that regular software developers …
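
As a first taste of that tooling, a few basic Git commands; the file names are illustrative.

    git init                                     # turn the current directory into a repository
    git add model.py train_data.csv              # stage the files you want to track
    git commit -m "First version of the model"   # record a snapshot of the staged files
    git status                                   # see what changed since the last commit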

Should justice use AI?

Should the widely understood justice system (courts, police, the penitentiary system, and related government agencies) be banned by law from collecting Big Data and using Artificial Intelligence? Back in 2013, I was part of a data analytics project for the State Police. On April Fools’ Day we received a hilarious hoax: an obviously fake internal announcement that police analytics could now predict crimes before they actually …

Lecture notes: an intro to Apache Spark programming

In Lecture 7 of our Big Data in 30 hours class, we discussed Apache Spark and did some hands-on programming. The purpose of this memo is to summarize the terms and ideas presented. Apache Spark is currently one of the most popular platforms for parallel execution of computing jobs in a distributed environment. The idea is not new. Starting in the late 1980s, the HPC (high …
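
A minimal PySpark sketch of such a parallel job; the input path is a placeholder and the word count is a textbook example rather than the lecture’s exercise.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
    sc = spark.sparkContext

    counts = (
        sc.textFile("hdfs:///data/input.txt")    # read the file as a distributed dataset
          .flatMap(lambda line: line.split())    # split lines into words, in parallel
          .map(lambda word: (word, 1))           # pair each word with a count of one
          .reduceByKey(lambda a, b: a + b)       # sum the counts across partitions
    )
    print(counts.take(10))                       # pull a few results back to the driver
    spark.stop()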

Top 10 Data Infrastructure technologies for a Data Scientist

We are in the middle of this semester’s Big Data in 30 Hours class. We just did lecture 7 out of 15. So far we have covered Relational Databases, Data Warehousing, BI (Tableau), NoSQL (MongoDB and ElasticSearch), Hadoop, HDFS, and Apache Spark. While we are about to move to Data Science in Python (with Numpy, Scikit-learn, Keras, and TensorFlow), I received valuable feedback from the students. …

Lecture notes: first steps in Hadoop

In Lecture 6 of our Big Data in 30 hours class, we talk about Hadoop. The purpose of this memo is to summarize the terms and ideas presented. About Hadoop: Hadoop, by the Apache Software Foundation, is software used to run other software in parallel. It is a distributed batch processing system that comes together with a distributed filesystem. It scales well over commodity hardware and …
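
To make “run other software in parallel” concrete, here is the classic MapReduce word count, sketched with the mrjob Python library as an assumption; the lecture itself may have used different tooling.

    from mrjob.job import MRJob

    class MRWordCount(MRJob):
        def mapper(self, _, line):
            # The map phase runs in parallel over splits of the input file.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # The reduce phase aggregates all counts emitted for the same word.
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()

Saved as wordcount.py, this runs locally with python wordcount.py input.txt, or against a Hadoop cluster with the -r hadoop switch.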