
Posts

Showing posts from 2017

Facial Landmark Detector

Dlib is a popular library that can be used for face detection and face recognition. In this article I will use it for facial landmark detection. Facial landmarks are facial features like the nose, eyes, mouth, or jaw. Start by installing the Dlib library. Dlib requires the Boost libraries: sudo apt-get install libboost-all-dev. Now we can install Dlib itself: sudo pip install dlib. The following example uses the PIL and numpy packages (instead of Pillow it is also possible to use the skimage package): pip install Pillow, pip install numpy. Note that in order to detect facial landmarks, a previously trained model file is needed. You can download one from the Dlib site at this link: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 After the download is complete, extract the archive and make sure its location is correctly referenced in the source file. The application first tries to detect faces in the given image; after that, for each face it tries to detect landmarks. For…

Prepare an Ubuntu System for Deep Learning

An Ubuntu Deep Learning System

A. Install the latest Nvidia drivers

1. Run the following commands to add the latest drivers from the PPA: sudo add-apt-repository ppa:graphics-drivers/ppa followed by sudo apt update
2. Then use the Ubuntu Software & Updates > Additional Drivers application to update your driver. For my GTX-1070 I chose the driver with version 384.69.
3. After installation, restart your PC. You may need to disable secure boot using the BIOS menu.
4. Run the following command to ensure that the drivers are installed correctly: lsmod | grep nvidia
5. If you have issues with the new driver, remove it with the following command: sudo apt-get purge nvidia*

For more information see:
https://askubuntu.com/questions/851069/latest-nvidia-driver-on-ubuntu-16-04
http://www.linuxandubuntu.com/home/how-to-install-latest-nvidia-drivers-in-linux

B. Install the CUDA Toolkit

1. Download the CUDA Toolkit from the following URL: h…

Java Custom ClassLoader

Problem: I want my previously written Java application to be run multiple times by a shell script with calculated parameters. However, the target platforms may not contain bash, so I decided to write another class which acts like a bash script and launches my application with generated parameters. My application has static variables and static classes. Before every launch I want the state information to be cleared, but since I work in the same JVM, static objects will not be removed. It turns out that using static fields and initializers may not be a good approach. There should be a way to solve this. The solution is to use a custom classloader. The classloader will load all the classes; at the end of the execution, the classloader and all the classes it loaded will be garbage collected, and at the next execution all classes will be reloaded. Solution: Custom Class Loader. Class loaders can be considered a container used to launch an application. Servlet containers like Tomcat use a custom classloader…
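Below is a minimal sketch of the technique, assuming a hypothetical application main class com.example.MyApp and launcher name ReloadingLauncher (the excerpt does not show the original post's code): a child-first classloader defines the application classes itself, so each run starts with fresh static state, and after a run finishes the loader and its classes become eligible for garbage collection.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.lang.reflect.Method;

    // Child-first classloader: application classes are defined by this loader,
    // so their static state lives and dies with the loader instance.
    public class ReloadingLauncher extends ClassLoader {

        public ReloadingLauncher(ClassLoader parent) {
            super(parent);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            // Leave JDK classes to the normal parent delegation.
            if (name.startsWith("java.") || name.startsWith("javax.")) {
                return super.loadClass(name, resolve);
            }
            synchronized (getClassLoadingLock(name)) {
                Class<?> c = findLoadedClass(name);
                if (c == null) {
                    byte[] bytes = readClassBytes(name);
                    c = (bytes != null) ? defineClass(name, bytes, 0, bytes.length)
                                        : super.loadClass(name, false);
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

        // Read the class file bytes from the classpath via the parent loader.
        private byte[] readClassBytes(String name) {
            String path = name.replace('.', '/') + ".class";
            try (InputStream in = getParent().getResourceAsStream(path)) {
                if (in == null) {
                    return null;
                }
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                }
                return out.toByteArray();
            } catch (IOException e) {
                return null;
            }
        }

        public static void main(String[] args) throws Exception {
            for (String param : new String[] { "run1", "run2" }) {
                ClassLoader loader = new ReloadingLauncher(ReloadingLauncher.class.getClassLoader());
                Class<?> appClass = loader.loadClass("com.example.MyApp");
                Method main = appClass.getMethod("main", String[].class);
                main.invoke(null, (Object) new String[] { param });
                // After this iteration the loader and every class it defined become
                // unreachable and can be garbage collected, clearing all static state.
            }
        }
    }

Each iteration simulates one launch of the application with generated parameters; because MyApp is loaded by a brand-new loader every time, its static initializers run again on each launch.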

Hadoop Cluster Installation Document

This document describes my experience following the Apache document titled “Hadoop Cluster Setup” [1], which is for Hadoop version 3.0.0-Alpha2. This document is the successor to Hadoop Installation Document - Standalone [2]; the “ubuntul_hadoop_master” machine is used in the rest of this document. You will need to read and follow Hadoop Installation Document - Standalone [2] before reading any further. A. Prepare the guest environments for the slave nodes. It is easy to clone virtual machines using VirtualBox: right-click “ubuntul_hadoop_master” and clone. Name the new VM “ubuntul_hadoop_slave1”. You can have as many slaves as you like. Since we simply clone the master machine, much of the configuration comes ready. In practice, slave nodes need more disk space while the master node needs more memory, but this is an educational setup and these details are not necessary. B. Install Hadoop. Hadoop comes installed with “ubuntul_hadoop_master”. C. Running the Cluster. In maste…

Hadoop Installation Document - Standalone Mode

This document describes my experience following the Apache document titled “Hadoop: Setting up a Single Node Cluster” [1], which is for Hadoop version 3.0.0-Alpha2 [2]. A. Prepare the guest environment. Install VirtualBox and create a virtual 64-bit Linux machine. Name it “ubuntul_hadoop_master”, give it 500MB of memory, and create a VMDK disk which is dynamically allocated up to 30GB. In the network settings, on the first tab you should see Adapter 1 enabled and attached to “NAT”; on the second tab, enable Adapter 2 and attach it to “Host-only Adapter”. The first adapter is required for the internet connection; the second is required for letting the outside world connect to a guest service. In the storage settings, attach a Linux ISO file to the IDE channel. Use any distribution you like; because of its small installation size, I chose the minimal Ubuntu ISO [1]. In the package selection menu, I left only the standard packages selected. Log in to the system. Set up the JDK: $ sudo apt-get install openjdk-8-jdk. Install ssh and pdsh, if not already i…

Spring Boot Rest Application

It is very easy to create a REST service. We do not even need a servlet container to run the application; Spring Boot will prepare a container for us, and we only need to add what is necessary. Even a web.xml file is not necessary. This REST service searches for a text in Elasticsearch and returns the result as JSON. The amount of code needed for this task is amazingly small. The first class is the controller; requests will be served by this controller. For the Elasticsearch repository see [1]. Then comes the Spring Boot application entry point. To use the REST service, issue a request like this: http://localhost:8080/recipe/by-name?name=mynameis For the complete code listing on GitHub see [2].
1. Spring Boot Elasticsearch Application
2. Github - essync
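A minimal sketch of what the controller and entry point can look like, inferred from the example URL above; the Recipe document and RecipeRepository (with its findByName query method) are assumed to come from the Elasticsearch application in [1], and the names in the real code [2] may differ. The two classes are shown together for brevity; in a real project each would live in its own file.

    import java.util.List;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // Controller serving GET /recipe/by-name?name=...; the returned list is
    // serialized to JSON automatically.
    @RestController
    @RequestMapping("/recipe")
    class RecipeController {

        private final RecipeRepository repository; // Elasticsearch repository from [1]

        RecipeController(RecipeRepository repository) {
            this.repository = repository;
        }

        @GetMapping("/by-name")
        List<Recipe> byName(@RequestParam("name") String name) {
            return repository.findByName(name);
        }
    }

    // Entry point: Spring Boot starts an embedded servlet container,
    // so no external container and no web.xml are needed.
    @SpringBootApplication
    class Application {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }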

Spring Boot Elasticsearch Application

Assume you have a large amount of text data and want to search it, and you have decided to use Elasticsearch or Solr as the search engine. Spring Data is your friend. This document shows an example application that uses Elasticsearch with Spring Data. The application uses Elasticsearch and JPA at the same time: data comes from MySQL and goes to Elasticsearch. Create a document class to represent your domain object. The next thing you need is a repository class to manage your domain object; query methods are possible. Now we can store data to Elasticsearch using the repository object. For the JPA entity and repository classes see [1]. The Component annotation makes this class Spring-managed, the Autowired annotation tells Spring to create the required repository objects, and the Transactional annotation is required for declarative transaction management. Now comes the Spring Boot application entry point. EnableElasticsearchRepositories is needed to activate the Elasticsearch repositories. Since another repository is…
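A minimal sketch of the document and repository classes described above, with hypothetical class, field, and index names (the actual code may differ):

    import java.util.List;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.elasticsearch.annotations.Document;
    import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

    // Document class representing the domain object in the search index.
    @Document(indexName = "recipes")
    class Recipe {

        @Id
        private String id;
        private String name;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Repository managing the document. Spring Data derives findByName from
    // the property name, so no implementation is written by hand.
    interface RecipeRepository extends ElasticsearchRepository<Recipe, String> {
        List<Recipe> findByName(String name);
    }

A @Component service can then autowire this repository together with the JPA repository from [1] and, inside a @Transactional method, read the rows from MySQL and save them to Elasticsearch.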

Spring Boot JPA Application

The Spring framework comes with many modules that make Java developers' lives easier. Spring Boot works like a charm; it has never been easier to write Java applications. This document was created to show how simply a JPA application can be written. A JPA application starts with creating the JPA entity and DAO classes. JPA entity classes can be generated using a tool that comes with any Java IDE. In Spring Data, DAO classes are called repository classes. A repository or DAO object is used to save, update, delete, or retrieve entity objects. Traditionally, DAO classes were created using base DAO classes, which saved us from writing boilerplate for the save, update, or delete methods; but when it came to retrieval methods, boilerplate code was unavoidable. The good news is that Spring Data comes with a better idea: query methods save us from boilerplate. See [1] for more details. The entity class is a standard JPA entity. An example service class is given below. By use of A…
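A minimal sketch of such an entity and repository pair, with hypothetical names (see [1] for the details of query methods):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.springframework.data.jpa.repository.JpaRepository;

    // A standard JPA entity, mapped to a table by convention.
    @Entity
    class Recipe {

        @Id
        @GeneratedValue
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // The repository replaces a hand-written base DAO: save, delete, and find
    // methods are inherited, and the retrieval method below is generated by
    // Spring Data from its name alone.
    interface RecipeRepository extends JpaRepository<Recipe, Long> {
        List<Recipe> findByNameContaining(String text);
    }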

Do I need to switch to Git from SVN

In recent years Git has gained great momentum and is increasing in popularity; more and more teams are switching to Git from SVN. This fact raises the question of whether I should switch to Git from SVN. To answer this question I investigated on the internet, relying on popular search engines: I looked at user reviews and considered their arguments. Then I started to use Git in parallel with SVN. Here comes my experience with both of them. Both are great version control systems. The most apparent difference is that Git is decentralized, which means you do not have to use a single repository at the center. I found SVN better in simplicity: everybody starts using SVN easily, and its workflow is quite simple: update, modify, commit. The revision number logic in SVN is great to have; I use the revision number to name jar files, and this way I can easily follow which jar corresponds to which version. I found Git better for not needing a constant connection to the central repository. I can f…