Tuesday, October 17, 2017

2. Notes on Machine Learning: Logistic Regression

Unlike regression problems, where our objective is to predict a scalar value for a given set of feature values, in classification problems our objective is to choose a discrete value from a limited set of choices. Therefore, we cannot use Linear Regression directly for classification problems. Based on how many categories we have to classify our input into, there are two types of problems: binary classification and multiclass classification.

In binary classification problems, there are only two possible outputs for the classifier: 0 or 1. Multiclass classification problems, as the name suggests, can have multiple output values such as 0, 1, 2 and so on; however, it is still a discrete set of values, not an infinite one. When solving these classification problems, we develop machine learning models which solve binary classification problems. When a problem has multiple classes, we use multiple binary classification models, one to check for each class.

Logistic Regression:

In order to keep the prediction output between 0 and 1, we use a sigmoid/logistic function as the hypothesis function. Therefore, we call this technique Logistic Regression. The hypothesis function of logistic regression is as follows.
$$h_\theta (x)=\frac{1}{1+e^{-\theta^T x}}$$
The vectorized representation of this hypothesis, where \(g\) is the sigmoid function, would look like the following.
$$h = g(X\theta)$$
For any input vector \(x\), this logistic regression hypothesis function outputs a value between 0 and 1. If the value is closer to 1, we can consider it a classification into class 1. Similarly, if the value is closer to 0, we can consider the classification as 0. Alternatively, we can interpret the output value between 0 and 1 as the probability of belonging to class 1. For example, if we receive the output 0.6, it means there's a \(60\%\) probability that the input data belongs to class 1. Similarly, an output of 0.3 means there's a \(30\%\) probability that the input data belongs to class 1.
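To make the hypothesis concrete, here is a minimal NumPy sketch of it (my own illustration, not from the course; the function names and example values are made up):

```python
import numpy as np

def sigmoid(z):
    """The logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(theta, x):
    """Hypothesis h_theta(x) = g(theta^T x); x includes the bias entry x0 = 1."""
    return sigmoid(x @ theta)

theta = np.array([0.0, 1.0, -1.0])   # hypothetical parameter vector
x = np.array([1.0, 2.0, 2.0])        # x0 = 1, then two feature values
print(predict_proba(theta, x))       # 0.5, because theta^T x = 0 here
```

An output of 0.5 is exactly the undecided point between the two classes.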

Cost Function:

The cost function of logistic regression is different from the cost function of linear regression. This cost function is designed so that when the model predicts the correct class with full confidence it generates a zero cost penalty, while a fully confident wrong prediction generates an infinite penalty value.
$$J(\theta)=-\frac{1}{m}\sum_{i=1}^{m} [y^i log(h_\theta(x^i)) + (1-y^i) log(1-h_\theta(x^i))]$$
A vectorized implementation of the cost function would look like the following.
$$J(\theta)=\frac{1}{m}\left(-y^T\log(h) - (1-y)^T\log(1-h)\right)$$
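The vectorized cost translates almost directly into code. A small NumPy sketch (my own; the toy `X` and `y` are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Vectorized cost: J = (1/m) * (-y^T log(h) - (1 - y)^T log(1 - h))."""
    m = len(y)
    h = sigmoid(X @ theta)
    return (-(y @ np.log(h)) - ((1 - y) @ np.log(1 - h))) / m

# With theta = 0, every prediction is 0.5, so J = -log(0.5) ≈ 0.693
X = np.array([[1.0, 2.0], [1.0, -1.0]])
y = np.array([1.0, 0.0])
print(cost(np.zeros(2), X, y))
```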

Gradient Descent:

In order to adjust the parameter vector \(\theta\) until it fits the training data set properly, we perform the gradient descent algorithm. The following update has to be repeated for each \(j\) simultaneously, where \(j\) indexes the parameters in \(\theta\). In this gradient descent algorithm, \(\alpha\) is the learning rate.
$$\theta_j=\theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}(h_\theta(x^i) - y^i)x_j^i$$
A vectorized implementation of the gradient descent algorithm would look like the following.
$$\theta = \theta - \frac{\alpha}{m}X^T(g(X\theta) - y)$$
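The vectorized update above can be sketched in NumPy like this (my own toy example; the dataset and learning rate are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    """One vectorized update: theta = theta - (alpha/m) X^T (g(X theta) - y)."""
    m = len(y)
    return theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))

# A few iterations on a tiny separable dataset (bias column plus one feature)
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.zeros(2)
for _ in range(1000):
    theta = gradient_step(theta, X, y, alpha=0.5)
print((sigmoid(X @ theta) > 0.5).astype(int))  # class predictions match y: [0 0 1 1]
```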

Multiclass Classification:

The above technique classifies data into two classes. When we encounter a multiclass classification problem, we should train a logistic regression model for each class. For example, if there are 3 classes, we need three logistic regression models, each trained to distinguish between its target class and the others (often called one-vs-all). At prediction time, we pick the class whose model outputs the highest probability. In this way, we can solve multiclass classification problems using logistic regression.
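The one-model-per-class idea can be sketched as follows (a NumPy illustration of my own; `train_binary`, `one_vs_all` and the toy dataset are hypothetical, not from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, y, alpha=0.5, iters=2000):
    """Fit one binary logistic regression classifier by gradient descent."""
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(iters):
        theta -= (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta

def one_vs_all(X, y, num_classes):
    """Train one classifier per class; each treats its class as 1, the rest as 0."""
    return np.array([train_binary(X, (y == c).astype(float)) for c in range(num_classes)])

def predict(all_theta, X):
    """Pick the class whose classifier outputs the highest probability."""
    return np.argmax(sigmoid(X @ all_theta.T), axis=1)

# Toy data: three classes spread along one feature (bias column first)
X = np.array([[1.0, -3.0], [1.0, -2.5], [1.0, 0.0],
              [1.0, 0.5], [1.0, 3.0], [1.0, 2.5]])
y = np.array([0, 0, 1, 1, 2, 2])
all_theta = one_vs_all(X, y, 3)
print(predict(all_theta, X))
```

Even though each individual classifier only answers "this class or not", taking the argmax over all of them yields a single multiclass prediction.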

Overfitting and Underfitting:

Overfitting and underfitting are two problems which can occur in both Linear Regression and Logistic Regression. The former occurs when our model fits the training data set so closely that it does not represent the general case properly. The latter occurs when our model does not even fit the training data set properly.

Regularization is a useful technique to address the problem of overfitting. The idea is to keep the values of the \(\theta\) parameter vector in a smaller range in order to stop the learned model curve from adjusting too aggressively. This is achieved by adding extra penalty terms to the cost function, which prevents the model from overfitting to the training dataset. We can use this technique in both linear regression and logistic regression.

Regularized Linear Regression:

The gradient descent algorithm for linear regression after adding regularization looks like the following. The two updates have to be repeated in each step of gradient descent. There, \(j\) stands for 1, 2, 3, ... which represent the remaining \(\theta\) parameters in the hypothesis, and \(\lambda\) is the regularization parameter. Note that \(\theta_0\) is not regularized.
$$\theta_0 = \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_0^i$$
$$\theta_j = \theta_j - \alpha [(\frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_j^i) + \frac{\lambda}{m} \theta_j]$$
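These two update rules can be sketched together in NumPy (my own illustration; note how the regularization term is zeroed out for \(\theta_0\)):

```python
import numpy as np

def regularized_step(theta, X, y, alpha, lam):
    """One regularized gradient descent step for linear regression (h = X theta)."""
    m = len(y)
    grad = (X.T @ (X @ theta - y)) / m
    reg = (lam / m) * theta
    reg[0] = 0.0                     # theta_0 is not penalized
    return theta - alpha * (grad + reg)

# Fit y ≈ 1 + 2x with a mild regularization penalty
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
theta = np.zeros(2)
for _ in range(5000):
    theta = regularized_step(theta, X, y, alpha=0.1, lam=0.01)
print(theta)  # close to [1, 2], very slightly shrunk by the penalty
```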

Regularized Logistic Regression:

The gradient descent algorithm for logistic regression after adding regularization looks like the following.
$$\theta_0 = \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_0^i$$
$$\theta_j = \theta_j - \alpha [(\frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_j^i) + \frac{\lambda}{m} \theta_j]$$
Of course, it looks like regularized linear regression. However, it is important to remember that in this case, the hypothesis function \(h_\theta (x)\) is the logistic function, unlike in linear regression.


Thursday, October 12, 2017

Ireland's Space Week in 2017

Last week was Space Week in Ireland, with a focus on promoting space exploration and related science and technologies among people. This time is especially significant for Ireland because the country is building a cube satellite with the joint contribution of universities and Irish space technology companies. During this Space Week, there were many events organized by different institutions all over Ireland. Even though I was busy with my work, I finally managed to attend an interesting event organized by University College Dublin.

The event was titled Designing for Extreme Human Performance in Space and was conducted by two very interesting personalities. The first was Dava J. Newman, a former deputy administrator of NASA who currently works at MIT. The second was Guillermo Trotti, a professional architect who has worked with NASA on interesting projects. The profiles of these two speakers attracted me to the event. The session ran for about an hour and a half, with the two speakers sharing the time to talk on the two different areas they are interested in. Finally, the session was concluded with a Q&A.

In her presentation, Dava talked about the extreme conditions in space which raise the need to design life support systems to assist astronauts. When she asked the famous astronaut Scott Kelly (@StationCDRKelly), who spent a year on the ISS, what would be the most needed thing if we are to improve space technology, he responded that life support systems to ease the operation of astronauts in space are the most needed thing. Dava presented the work she is involved in, designing a new kind of space suit for astronauts to use on other planets such as Mars. The pictures she showed depict a skin-tight suit, custom designed to the body specifications of an astronaut, very much like a suit from a sci-fi movie.

Gui Trotti, in his presentation, talked specifically about his architectural interest in building habitable structures for humans on the Moon and Mars. As a professional architect, he is inspired to bring his skills to human colonies in outer space. During the presentation, he mentioned three things that inspired me greatly. The first is the fact that when astronauts go to space and turn back to look at their home planet, all the borders and nationalistic pride go away, replaced by the feeling that we are all one human race and that planet Earth is the only home we have. Secondly, he described his tour around the world in a sailing boat, which reminded him that space exploration is another form of the human courage to explore and see the world. Finally, he said that his dream is to build a university on the Moon one day, to enable students from Earth to visit and do research while appreciating our home planet.

During the Q&A session, a lot of people asked interesting questions. Among those, one question was about the commercialization of space. The speakers responded with the important point that there is potential for commercial activities such as manufacturing in space, especially things which can be done more easily in zero-gravity environments than on the surface of the Earth. Various experiments, such as growing food plants and 3D printing, have been tried on the ISS in this direction. In the near future, perhaps a decade down the line, we will be able to see much more activity from the private sector in space than today. They are very positive about the progress in this area.

Even though I'm not working anywhere related to space exploration, I'm always fascinated by this topic and I will continue to be.


Thursday, October 5, 2017

1. Notes on Machine Learning: Linear Regression

Machine Learning is a hot topic these days since various kinds of applications rely on Machine Learning algorithms to get things done. While learning this topic, I will be writing my own notes about it as an article series in this blog for my own future reference. The contents might be highly abstract, as this is not a tutorial aimed at somebody learning Machine Learning by reading these notes.


According to Tom Mitchell's definition, "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." Basically, we are enabling computers to do things without explicitly programming them to do so.

Categories of Machine Learning:

There are two broad categories of Machine Learning algorithms: the first is Supervised Learning and the second is Unsupervised Learning. There are various subcategories under these, which can be illustrated as follows.

Machine Learning Models:

A model is a function (i.e. a hypothesis) \( h(x) \) which provides the output value \( y \) for given input values \( x \), based on a previously given training dataset \( X \) and output set \( Y \). The input values \( x \) are called features. A hypothesis function in Linear Regression with one feature would look like the following.
$$ h(x) = \theta_{0} x_{0} + \theta_{1} x_{1} $$
The first feature \( x_{0} \) is always set to 1, while the second feature \( x_{1} \) is the actual feature used in this model. The parameters \( \theta_{0} \) and \( \theta_{1} \) are the weights of the features in the final output, and therefore they are the values we are looking for in order to build the linear regression model for a specific dataset. The reason why we have an extra feature at the beginning which is always set to 1 is that it makes it easy to perform vectorized calculations (using matrix-based tools).
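The bias-column trick is easy to see in code. A small NumPy sketch of my own (the values are made up for illustration):

```python
import numpy as np

def hypothesis(theta, X):
    """h(x) = theta_0 * x_0 + theta_1 * x_1, with x_0 fixed to 1 for every example."""
    return X @ theta

# Prepend the x0 = 1 bias column to the raw feature values
raw_x = np.array([0.0, 1.0, 2.0])
X = np.column_stack([np.ones_like(raw_x), raw_x])
theta = np.array([1.0, 2.0])        # intercept 1, slope 2
print(hypothesis(theta, X))         # [1. 3. 5.]
```

Because of the constant column, the intercept \( \theta_{0} \) needs no special case: the whole hypothesis is a single matrix product.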

Cost Function:

In order to measure the accuracy of a particular hypothesis function, we use another function called the cost function. It is actually a mean squared error function over the differences between the predicted output values and the true output values of the hypothesis. By adjusting the values of the parameters \( \theta_{0} \) and \( \theta_{1} \) in the hypothesis, we can minimize the cost function and make the hypothesis more precise.
$$J(\theta_{0},\theta_{1}) = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x_i) - y_i)^{2}$$
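A direct NumPy sketch of this cost function (my own illustration; the toy dataset follows \( y = 2x \)):

```python
import numpy as np

def cost(theta, X, y):
    """J(theta) = (1 / 2m) * sum over i of (h(x_i) - y_i)^2."""
    m = len(y)
    residuals = X @ theta - y
    return (residuals @ residuals) / (2 * m)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])   # bias column plus one feature
y = np.array([2.0, 4.0, 6.0])                        # exactly y = 2x
print(cost(np.array([0.0, 2.0]), X, y))  # 0.0: h(x) = 2x fits perfectly
print(cost(np.zeros(2), X, y))           # (4 + 16 + 36) / 6 ≈ 9.33
```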

Gradient Descent:

In this algorithm, what we do is keep adjusting the parameters \( \theta_{0} \) and \( \theta_{1} \) until the cost function eventually becomes as small as it can get. That means we have found the most accurate model for the training dataset distribution. So, in order to adjust the parameters \( \theta_{0} \) and \( \theta_{1} \), we perform the following step over and over again, where \( j \) is 0 and 1 for the two parameters.
$$\theta_{j} = \theta_{j} - \alpha \frac{\partial }{\partial \theta_{j}} J(\theta_{0},\theta_{1})$$
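For this squared-error cost, the partial derivative works out to \( \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x_i) - y_i)x_{j,i} \), which in vectorized form is \( \frac{1}{m}X^T(X\theta - y) \). A minimal NumPy sketch of my own (learning rate and dataset are arbitrary):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=2000):
    """Repeat theta_j = theta_j - alpha * dJ/dtheta_j for all j simultaneously."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = (X.T @ (X @ theta - y)) / m   # gradient of the squared-error cost
        theta = theta - alpha * grad         # simultaneous update of every theta_j
    return theta

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])               # generated by theta = (1, 2)
print(gradient_descent(X, y))               # converges close to [1. 2.]
```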


Friday, September 29, 2017

Upgrading Octave to the Latest Version on Ubuntu

These days, I'm following the machine learning course on Coursera conducted by Andrew Ng. In this course, the most important things are the exercises. Since I'm on the Linux platform, with Ubuntu 16.04, I installed Octave for this work as follows.

sudo apt-get update
sudo apt-get install octave

The latest Octave version in the Ubuntu 16.04 repositories is 4.0.0. However, while working on the exercises of this course, I realized that this version has some problems which make it unsuitable for the sample Octave scripts provided in the course. The advice was to use an Octave version higher than 4.0. In order to do that, we have to add a new PPA to the system as follows.

sudo add-apt-repository ppa:octave/stable
sudo apt-get update

Now, let's remove the previous Octave version entirely and install it from scratch with the new PPA.

sudo apt-get remove octave
sudo apt-get autoremove
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install octave

Now, the version we get is the latest version in the new PPA. You can check it by starting Octave and typing "version" at the prompt. The version I got was Octave 4.2.1.


Friday, September 22, 2017

"AI and The Future" by Margaret Boden

Last Tuesday, I attended an interesting guest talk by Professor Margaret Boden from the University of Sussex under the topic AI and The Future. Even though I'm not much into the Artificial Intelligence topic, I decided to attend this session, simply driven by curiosity. Most importantly, this was the first session of its kind I attended at UCD, and therefore I just wanted to see how guest talks are done here. The talk was delivered at the Moore Auditorium in the UCD Science Center.
In her talk, she mostly discussed the implications of AI systems in our everyday life and what she believes are the things to worry about. The concern she pointed out is that when AI systems replace human presence in certain activities, we tend to get bad results and bad long-term impacts on our society, even though the AI application can seem useful and make life easier. To support her point, she gave a broad range of examples, including autonomous drones and AI systems that take care of the elderly and kids.

Even though I have a habit of posing a question when I attend sessions like these, unfortunately, I couldn't ask one in this session. They were running out of time and many people wanted to ask questions. However, I feel that it was worth the time I invested in attending this guest talk. I will hopefully attend future sessions organized by them.

Wednesday, August 16, 2017

Demodulating AM Signals using GNURadio

I have been using GNURadio for different purposes for some time. However, most of the time, I used either a very simple flow graph created by myself or a complicated flow graph created by somebody else. In the case of the complicated flow graphs, I didn't even bother to explain how they work, as I didn't really understand their internal functionality. Yesterday, I came across a need to demodulate a signal captured from an RTL-SDR and saved to a file. From the output I observed through the GQRX tool, the signal is modulated using Amplitude Modulation (AM). I wanted to implement a demodulator by myself, and therefore I searched the web for a tutorial on this kind of thing. I found the tutorial given in the references section, which was really useful [1].

In the rest of this post, I will explain how this AM demodulation flow graph is implemented and the functionality of each block, according to my understanding. There can be missing details and inaccurate things. If you notice a glitch, please be kind enough to let me know by leaving a comment. Here we go.

(1) The signal I'm trying to demodulate is at 32MHz, and therefore I tuned an RTL-SDR dongle to that frequency and captured data to a file at a sample rate of 1.8MHz. The tool I used for this purpose is osmocom_fft, which comes in handy when we need to save captured data in the .cfile format.

(2) Let's start with a File Source block in the flow graph. Double click on it and then browse to select the source file which contains data. Set the Repeat option to yes.

(3) The data in the file is recorded at a 1.8MHz sample rate and the samples are centered on the 32MHz signal. If we draw an FFT spectrogram from the raw data of the file, we will see the X-axis span from (32-0.9)MHz to (32+0.9)MHz, since complex samples taken at 1.8MHz cover a 1.8MHz-wide spectrum. However, the real AM signal does not span such a width. The real signal was within about a 5kHz width (not exactly). Therefore, let's narrow our data scope down to about 10kHz in order to focus on our signal. To narrow our spectrum information down from the 1.8MHz width to 10kHz, we need to resample the data. So, let's use a Rational Resampler block as the next step.

In the Rational Resampler block, we have two important parameters: Interpolation and Decimation. The Interpolation value is the number by which the input signal's sample rate is multiplied. The Decimation value is the number by which the input signal's sample rate is divided. In order to resample the 1.8MHz signal down to a 10kHz signal, we have to divide the input rate by 180 and multiply it by 1. That means we set the Interpolation to 1 and the Decimation to 180 in the Rational Resampler block.

Here, instead of directly putting the resampling value 180 in the field, let's make the value a variable so that we can change it easily later. Therefore, add a variable called resamp_factor and set its value to 180.

(4) It is important to note that the purpose of the Rational Resampler is only to change the sampling rate of the data. The output of the current setup is a signal with a sample rate of 10kHz. However, to actually isolate the signal we need to extract, we should filter out the other frequencies and keep the target frequency. This is where we need a filter. The target signal is about 5kHz wide and is centered at 32MHz. Therefore, we add a Low Pass Filter block with a cutoff frequency of 5kHz. Any frequency components above 5kHz will be removed by this block.

When setting the sample rate of the Low Pass Filter block, it is important to set it to (samp_rate / resamp_factor), since the sample rate has now been changed by the previous block, the Rational Resampler.

(5) Now we are focused on the exact signal we need to demodulate. Since this is amplitude modulation (AM), we are interested in the amplitude of the signal. We can convert the complex data coming out of the filter to a series of numbers which represent the amplitudes, using a Complex to Mag block. The output of this block is basically the demodulated baseband signal.
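As an aside, the effect of the Complex to Mag block can be demonstrated with a few lines of NumPy. This is a synthetic example of my own, not part of the flow graph: it builds complex samples whose magnitude carries an audio tone, then recovers the envelope with abs(), exactly what Complex to Mag does.

```python
import numpy as np

fs = 10_000.0                        # 10 kHz sample rate, as after our resampler
t = np.arange(0, 0.1, 1 / fs)
baseband = 0.5 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz audio tone
envelope = 1.0 + baseband                      # AM: carrier amplitude follows the audio
phase = 2 * np.pi * 300 * t                    # a small leftover frequency offset
iq = envelope * np.exp(1j * phase)             # complex samples, like the filter output

recovered = np.abs(iq)               # Complex to Mag: drop the phase, keep the envelope
print(np.allclose(recovered, envelope))        # True: audio recovered (plus a DC offset)
```

The magnitude is insensitive to the residual phase/frequency offset, which is why this simple envelope detector works without a carrier recovery loop.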

(6) It is useful to have a way to increase and decrease the amplitude of the demodulated signal in order to listen to it conveniently. To achieve that, we add a Multiply Const block. In the constant field, instead of an exact number, we put the name volume. Then, we add a WX GUI Slider block separately and set its ID to volume. Set the minimum value to 0 and the maximum value to 10, with the number of steps set to 100. Now, when we change the volume on the WX GUI Slider, it will change the value in the Multiply Const block.

(7) Before we add an audio sink to listen to the sound, we need one more thing. Since an audio sink needs the sample rate to be around 48kHz, let's resample the current 10kHz signal to 48kHz using another Rational Resampler. Set the Decimation to 10 and the Interpolation to 48, since 10kHz × 48 / 10 = 48kHz.
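The sample-rate bookkeeping across the two Rational Resamplers can be checked with trivial arithmetic (a sketch of my own; samp_rate and resamp_factor mirror the flow-graph variables):

```python
# Sample-rate bookkeeping for the two Rational Resampler blocks
samp_rate = 1_800_000          # RTL-SDR capture rate
resamp_factor = 180

after_first = samp_rate * 1 // resamp_factor   # interpolation 1, decimation 180
print(after_first)             # 10000 -> the 10 kHz rate the Low Pass Filter must use

audio_rate = after_first * 48 // 10            # interpolation 48, decimation 10
print(audio_rate)              # 48000 -> matches the Audio Sink
```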

(8) Now the last thing. Add an Audio Sink block and set the sample rate to 48kHz. If you find it useful, add a WX GUI Scope Sink, which will show the waveform of the audio signal being heard. The sample rate of that block, too, has to be 48kHz.

Run this flow graph and listen to the output sound. You should be able to hear the demodulated signal. In case you want to take a look at the original GNURadio Companion file I created for the above AM demodulator, download the GRC file from the gist given in this link. Cheers!



Sunday, August 6, 2017

Memories of Staff Development Program at University of Kelaniya

While being a lecturer at the University of Colombo School of Computing, I had a requirement to follow a staff development program as one of the conditions to be confirmed in my position. Even though I have expertise in my own field of study, there's no doubt that I'm not a complete person as a lecturer. How to properly engage in the teaching and learning process as a lecturer is something I have to learn separately. The University Grants Commission (UGC) of Sri Lanka has accredited several staff development programs offered by different universities in Sri Lanka. Among these, I selected the SDC program at the University of Kelaniya.

Since I did my bachelor's degree at the University of Colombo and joined the staff of the same university, I had never had a chance to participate in a course at another university in Sri Lanka as a student. I have been to the University of Sri Jayawardanepura, but as a visiting lecturer, not as a student. Therefore, joining the SDC program at the University of Kelaniya was the first time I played the student role at another university. Prof. Dilkushi Wettewe, who was the director of the SDC program then, assisted me greatly when I contacted her for further details. Even though the University of Kelaniya is located a bit far from the University of Colombo, I considered it a positive change to visit a new place and be a student. It indeed became a whole new experience.

The course was scheduled for every Friday, running from morning to evening across the whole day. The participants included new lecturers from various universities in Sri Lanka, such as the University of Kelaniya, the University of Vocational Technology and Bhikkhu University Anuradhapura. I was the only participant from the University of Colombo. Before attending the course, I never realized that I had so much to learn about the teaching and learning process as a lecturer. The course started with an introduction to the university system in Sri Lanka and how parliamentary acts formally established the universities here. Throughout the course, we studied various aspects of university education. The most valuable thing I came across by attending the SDC program is the participants I got to know there. I believe that the contacts I made will remain for the rest of my career.

On the last day of the course, we had a little ceremony to award certificates to the attendees, at which I performed the vote of thanks. I was lucky to receive a best portfolio award, which was given to only 5 participants of the course. I'm really happy about the opportunity I got to take part in the program.