Orhan G. Yalçın

The guide to help you navigate around my content with ease.


As you might know, I regularly write on Medium covering topics such as Artificial Intelligence, Machine Learning, Deep Learning, Data Science, Data Visualization, TensorFlow, and other programming topics. Since the volume of my content reached a certain level, it got harder to see what I wrote about. So, I put together this guide to help you navigate around my content with ease.

I have been publishing in Towards Data Science for a long time and recently started publishing in The Startup. I publish my posts under the following series:

Table of Contents

I   - Artificial Intelligence Essentials
II - ML Programming Essentials
III - Deep Learning with TensorFlow 2.0
IV - Natural Language Processing
V - Deep Learning Case Studies
VI - Kaggle's Titanic Competition Mini-Series
VII - Non-Technical Artificial Intelligence Articles
VIII - Non-English…

Deep Learning Case Studies

Using Convolutional Neural Networks to Classify Handwritten Digits with TensorFlow and Keras | Supervised Deep Learning

If you are reading this article, I am sure that we share similar interests and are/will be in similar industries. So let’s connect via LinkedIn! Please do not hesitate to send a contact request! Orhan G. Yalçın - LinkedIn

MNIST Dataset and Number Classification by Katakoda

Before diving into this article, I just want to let you know that if you are into deep learning, I believe you should also check my other articles such as:

1 — Image Noise Reduction in 10 Minutes with Deep Convolutional Autoencoders where we learned to build autoencoders for image denoising;

2 — Predict Tomorrow’s Bitcoin (BTC) Price with Recurrent Neural Networks, where we use an RNN to predict BTC prices; since it pulls data from an API, the results always remain up to date.

←← PART 1 | ← PART 2 | Natural Language Processing

Learn the basics of the pre-trained NLP model, BERT, and build a sentiment classifier using the IMDB movie reviews dataset, TensorFlow, and Hugging Face transformers

I prepared this tutorial because it is surprisingly difficult to find a blog post with complete, working BERT code from beginning to end; the ones you find are often full of bugs. So, I dug into several articles, combined their code, edited it, and finally arrived at a working BERT model. Just by running the code in this tutorial, you can create a BERT model and fine-tune it for sentiment analysis.

Figure 1. Photo by Lukas on Unsplash

Natural language processing (NLP) is one of the most cumbersome areas of artificial intelligence when it comes to data preprocessing. Apart from preprocessing and tokenizing text datasets, it takes a lot of time to train successful NLP models. But today is your lucky day! …
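To give a taste of what that preprocessing involves, here is a minimal sketch in plain Python (hypothetical helper names, not the tutorial's actual code) that lowercases a raw review, strips punctuation, tokenizes it, and encodes each token as an integer ID:

```python
import re
import string

def tokenize(text):
    # Lowercase, replace punctuation with spaces, and split on whitespace
    text = text.lower()
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)
    return text.split()

def build_vocab(texts):
    # Map each unique token to an integer ID (0 is reserved for padding)
    vocab = {"<pad>": 0}
    for text in texts:
        for token in tokenize(text):
            vocab.setdefault(token, len(vocab))
    return vocab

reviews = ["This movie was great!", "This movie was terrible..."]
vocab = build_vocab(reviews)
encoded = [[vocab[t] for t in tokenize(r)] for r in reviews]
print(encoded)  # [[1, 2, 3, 4], [1, 2, 3, 5]]
```

Real pipelines (BERT included) use subword tokenizers rather than whitespace splitting, but the shape of the work is the same: raw strings in, integer IDs out.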

Machine Learning Programming Essentials

Learn whether IPython, Jupyter Notebook, and Google Colab are Rivals or Complementary Tools; Understand Their Relationship

Figure 1. IPython, Jupyter Notebook, Google Colab Comparison (Figure by Author)

Colaboratory, or Colab for short, is a Google Research product, which allows developers to write and execute Python code through their browser. Google Colab is an excellent tool for deep learning tasks. It is a hosted Jupyter notebook that requires no setup and has an excellent free version, which gives free access to Google computing resources such as GPUs and TPUs.

In this post, we will cover three topics:

1 — Interactive Python Programming Environments: Python, Jupyter Notebook, and Google Colab;

2 — Four Additional Features of Google Colab over Jupyter Notebook; and

3 — How to Create a Google Colab Notebook in 5 Easy Steps. …


Learn the Basics of Text Vectorization, Create a Word Embedding Model trained with a Neural Network on IMDB Reviews Dataset, and Visualize it with TensorBoard Embedding Projector

Figure 1. Photo by Raphael Schaller on Unsplash

This is a follow-up tutorial prepared after Part I of the tutorial, Mastering Word Embeddings in 10 Minutes with TensorFlow, where we introduced several word vectorization concepts such as One Hot Encoding and Encoding with a Unique ID Value. I highly recommend checking out that tutorial if you are new to natural language processing.

In Part II of the tutorial, we will vectorize our words and train their values using the IMDB Reviews dataset. This tutorial is our own take on TensorFlow’s tutorial on word embedding. We will train a word embedding using a simple Keras model and the IMDB Reviews dataset. …
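The core idea behind an embedding layer can be sketched without Keras at all: it is just a trainable lookup table with one vector per word ID. The sketch below (plain Python, hypothetical names, not the tutorial's code) shows the lookup; training would then nudge these vectors via backpropagation:

```python
import random

random.seed(42)
EMBED_DIM = 4

def make_embedding(vocab_size, dim):
    # An embedding layer is a trainable table: one row of `dim` floats per
    # word ID, initialized with small random values and updated by training.
    return [[random.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(vocab_size)]

vocab = {"<pad>": 0, "great": 1, "terrible": 2}
table = make_embedding(len(vocab), EMBED_DIM)

def embed(word_ids, table):
    # Embedding a sentence is simply indexing the table row by row
    return [table[i] for i in word_ids]

vectors = embed([1, 2], table)
print(len(vectors), len(vectors[0]))  # 2 4
```

In Keras, `tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=dim)` plays exactly this role, with the table stored as a trainable weight matrix.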

Natural Language Processing — Part 1 | Part 2 →

Covering the Basics of Word Embedding, One Hot Encoding, Text Vectorization, Embedding Layers, and an Example Neural Network Architecture for NLP

Photo by Nick Hillier on Unsplash

Word embedding is one of the most important concepts in Natural Language Processing (NLP). It is an NLP technique where words or phrases (i.e., strings) from a vocabulary are mapped to vectors of real numbers. The need to map strings into vectors of real numbers originated from computers’ inability to do operations with strings.

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.

There are several NLP techniques to convert strings into representative numbers, such…
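One of the simplest of those techniques, one hot encoding, can be sketched in a few lines of plain Python (an illustrative toy vocabulary, not the article's code):

```python
def one_hot(word, vocab):
    # A one hot vector is as long as the vocabulary, with a single 1 at
    # the word's index and 0 everywhere else
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

vocab = {"cat": 0, "dog": 1, "fish": 2}
print(one_hot("dog", vocab))  # [0, 1, 0]
```

The obvious drawback, which motivates embeddings, is that these vectors grow with the vocabulary and carry no notion of similarity between words.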

Artificial Intelligence Essentials

Understanding the Major ML Approaches: Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning

Figure 1. Photo by Adrian Trinkaus on Unsplash

With the constant advancements in artificial intelligence, the field has become too big to specialize in as a whole. There are countless problems that we can solve with countless methods. The knowledge of an experienced AI researcher specialized in one field may be of little use in another. That is why understanding the nature of different machine learning problems is so important. Even though the list of machine learning problems is very long and impossible to cover in a single post, we can group these problems into four different learning approaches:

  • Supervised Learning;
  • Unsupervised Learning;
  • Semi-supervised Learning; and
  • Reinforcement Learning.

Before we dive into each of these approaches, let’s start with what machine learning…
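To make the first of these approaches concrete, here is a toy sketch of supervised learning (plain Python, made-up data, not from the article): we are given inputs paired with known labels, and we predict a new point's label with a simple nearest-neighbor rule:

```python
def nearest_neighbor_predict(train_points, train_labels, query):
    # Supervised learning in miniature: learn from labeled examples and
    # predict the label of the closest training point to the query.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_points)),
               key=lambda i: dist(train_points[i], query))
    return train_labels[best]

# Hypothetical labeled data: [height, weight] -> species
points = [[4.0, 2.0], [4.2, 2.1], [9.0, 7.5], [8.8, 7.0]]
labels = ["cat", "cat", "dog", "dog"]
print(nearest_neighbor_predict(points, labels, [8.5, 7.2]))  # dog
```

Unsupervised learning would drop the labels and look for structure in the points alone, which is exactly the distinction the four approaches above turn on.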


Using Natural Language Processing (NLP), Deep Learning, and GridSearchCV in Kaggle’s Titanic Competition | Machine Learning Tutorials

Figure 1. Titanic Under Construction on Unsplash

If you followed my tutorial series on Kaggle’s Titanic Competition (Part-I and Part-II) or have already participated in the Competition, you are familiar with the whole story. If not, since this is a follow-up tutorial, I strongly recommend checking out the Competition Page or Part-I and Part-II of this tutorial series. …


Improving Our Code to Obtain Better Results for Kaggle’s Titanic Competition with Data Analysis & Visualization and Gradient Boosting Algorithm

In Part-I of this tutorial, we developed a small Python program with fewer than 20 lines that allowed us to enter our first Kaggle competition.

However, this model did not perform very well, since we did not do thorough data exploration and preparation to understand the data and structure the model better. In Part-II of the tutorial, we will explore the dataset using Seaborn and Matplotlib. In addition, new concepts will be introduced and applied for a better-performing model. Finally, we will improve our ranking with the second submission.
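The kind of exploration meant here can be sketched in plain Python with a few made-up rows (not the real Kaggle data): group passengers by a feature and compare survival rates, the summary you would then plot with Seaborn:

```python
# Hypothetical passenger rows, each a (sex, survived) pair
passengers = [
    ("female", 1), ("female", 1), ("female", 0),
    ("male", 0), ("male", 0), ("male", 1), ("male", 0),
]

def survival_rate_by(feature_values):
    # Group rows by a feature value and compute the mean survival rate,
    # the per-group summary a Seaborn barplot would visualize
    totals, survived = {}, {}
    for value, outcome in feature_values:
        totals[value] = totals.get(value, 0) + 1
        survived[value] = survived.get(value, 0) + outcome
    return {v: survived[v] / totals[v] for v in totals}

print(survival_rate_by(passengers))
```

On the real dataset this gap between groups is what tells you which features are worth feeding to the model, and pandas' `groupby(...).mean()` does the same aggregation in one line.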

Figure 1. Sea Trials of RMS Titanic on Wikipedia

Using Jupyter or Google Colab Notebook

For your programming environment, you may choose one of these two options: Jupyter Notebook and Google Colab…

←←← Part 1 | ←← Part 2 | ← Part 3 | DEEP LEARNING WITH TENSORFLOW 2.X — Part 4

Comparing Eager Execution and Graph Execution Using Code Examples, Understanding When to Use Each and Why TensorFlow Switched to Eager Execution | Deep Learning with TensorFlow 2.x

Figure 1. Eager Execution vs. Graph Execution (Figure by Author)

This is Part 4 of the Deep Learning with TensorFlow 2.x Series, and we will compare two execution options available in TensorFlow:

Eager Execution vs. Graph Execution

You may not have noticed that you can actually choose between the two. The reason is that TensorFlow sets eager execution as the default option and does not bother you unless you are looking for trouble😀. But this was not the case in TensorFlow 1.x versions. Let’s see what eager execution is and why TensorFlow made a major shift away from graph execution with TensorFlow 2.0.
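The difference can be illustrated without TensorFlow by a toy analogy in plain Python (a deliberately simplified sketch, not how TensorFlow is implemented): eager execution evaluates each operation immediately, while graph execution first records the operations as a graph and only computes values when the graph is run:

```python
def eager_add_square(x, y):
    # Eager style: every line runs and returns a concrete value right away
    s = x + y
    return s * s

class Graph:
    """Graph style: calls record nodes; nothing computes until run()."""
    def __init__(self):
        self.ops = []

    def add(self, a, b):
        self.ops.append(("add", a, b))
        return len(self.ops) - 1  # a handle to the node, not a value

    def square(self, node):
        self.ops.append(("square", node, None))
        return len(self.ops) - 1

    def run(self, node, feeds):
        # Only here does any arithmetic actually happen
        values = []
        for op, a, b in self.ops:
            if op == "add":
                values.append(feeds[a] + feeds[b])
            else:  # square
                values.append(values[a] ** 2)
        return values[node]

g = Graph()
out = g.square(g.add("x", "y"))
print(eager_add_square(2, 3), g.run(out, {"x": 2, "y": 3}))  # 25 25
```

The deferred style is what made TensorFlow 1.x fast but hard to debug; TensorFlow 2.x keeps eager execution as the default and lets you opt back into graphs with `tf.function` when you need the speed.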



Orhan G. Yalçın

I write about artificial intelligence and machine learning. Join hundreds of others to speed up your machine learning journey → eepurl.com/hd6Xfv
