Git version control: part 2

With the help of this article, you made your first steps with Git, the version control software. You learned to commit your work so that it became version-controlled. You need just two more skills: working with remote repositories, and checking out a particular version of your files (not necessarily the newest one). Learn those two things, and you’re good to go. This text, by …
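A minimal sketch of those two skills, using a local bare repository to stand in for a remote server (the paths and commit messages here are made up for illustration):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Create a "remote" repository locally, standing in for a server.
git init --bare -q remote.git

# Clone it and make two commits.
git clone -q remote.git work && cd work
git config user.email "student@example.com"   # placeholder identity
git config user.name  "Student"
echo "v1" > file.txt && git add file.txt && git commit -qm "first version"
echo "v2" > file.txt && git commit -aqm "second version"

# Push the commits to the remote.
git push -q origin HEAD

# Check out the previous version of the files, then return to the newest.
git checkout -q HEAD~1
cat file.txt            # prints "v1"
git checkout -q -
cat file.txt            # prints "v2"
```

In day-to-day work the remote would be a URL (for example on GitHub) used with `git clone`, and `git checkout` would take a commit hash obtained from `git log --oneline`.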

Data Engineering + Data Science: building the full stack

This article is part of the Big Data in 30 Hours course material, meant as a reference for the students. In our class we have looked at a number of Data Engineering and Data Science technologies. You may be wondering how they play together. It is now time to build an end-to-end workflow resembling production environments. An example data streaming architecture. In Data Science, we primarily …

Git version control: concise introduction

This article is part of the Big Data in 30 Hours lecture series and is intended to serve as reference material for students. However, I hope others can benefit as well. Why do we need version control in Data Science? Working with data is similar to working with software: a Data Scientist developing source code and data for models needs the same basic tooling as regular software developers …

Should justice use AI?

Should the broadly understood justice system (courts, police, the penitentiary system, and related government agencies) be banned by law from collecting Big Data and using Artificial Intelligence? Back in 2013, I was part of a data analytics project for the State Police. On April Fools’ Day we received a hilarious hoax: an obviously fake internal announcement that police analytics could now predict crimes before they actually …

Lecture notes: an intro to Apache Spark programming

In Lecture 7 of our Big Data in 30 Hours class, we discussed Apache Spark and did some hands-on programming. The purpose of this memo is to summarize the terms and ideas presented. Apache Spark is currently one of the most popular platforms for parallel execution of computing jobs in a distributed environment. The idea is not new: starting in the late 1980s, the HPC (high …

Top 10 Data Infrastructure technologies for a Data Scientist

We are in the middle of this semester’s Big Data in 30 Hours class. We have just finished Lecture 7 out of 15. So far we have covered Relational Databases, Data Warehousing, BI (Tableau), NoSQL (MongoDB and ElasticSearch), Hadoop, HDFS, and Apache Spark. As we are about to move on to Data Science in Python (with NumPy, scikit-learn, Keras, and TensorFlow), I received valuable feedback from the students. …

Graph Databases: Cosmos DB Graph API – Key Concepts and Best Practices

The purpose of this post is to recap the most important points from the recent Big Data in 30 Hours Lecture 5. What is a graph? Vertices denote discrete objects, such as a person, a place, or an event. Edges denote relationships between vertices. For example, a person might know another person, be involved in an event, or have recently been at a …

Lecture notes: first steps in Hadoop

In Lecture 6 of our Big Data in 30 Hours class, we talk about Hadoop. The purpose of this memo is to summarize the terms and ideas presented. About Hadoop: Hadoop, by the Apache Software Foundation, is software used to run other software in parallel. It is a distributed batch processing system that comes with a distributed filesystem. It scales well over commodity hardware and …

Installing Oracle Database on Windows 10

In Lectures 5 and 6 of our hands-on Big Data in 30 Hours class, we will be experimenting with Data Warehousing and ETL. Prior to class you need to install Oracle Database and the client utility Oracle SQL Developer. Here are brief instructions on how to do it on your laptop. I wrote this summary because I found the Oracle documentation too detailed.