navigator.geolocation callback function not fired on Mac Safari

It turned out that the Mac Safari browser does not support geolocation when the machine is on a wired connection. The problem was solved once I turned on the Wi-Fi, presumably because Safari's location service positions the machine from nearby Wi-Fi networks.
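If you hit the same thing, registering the error callback makes the failure visible instead of silent. A minimal sketch (plain browser JavaScript, no library assumed):

    navigator.geolocation.getCurrentPosition(
        function (pos) {    // success callback: the one that never fired for me
            console.log(pos.coords.latitude, pos.coords.longitude);
        },
        function (err) {    // error callback: reports why it failed
            console.log('geolocation failed:', err.code, err.message);
        },
        { timeout: 10000 }  // give up after 10 seconds instead of hanging
    );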

Gotcha!!!

Build Hadoop Native Libraries

How to build Hadoop Native Libraries for Hadoop 2.2.0

Because the Hadoop 2.2.0 binary distribution ships with a 32-bit libhadoop by default, users have to build the native libraries themselves to avoid warning messages such as the "disabled stack guard" warning for libhadoop.so.

The official Hadoop website http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/NativeLibraries.html gives completely unclear instructions on how to build the Hadoop native libraries.

So here is what you should do:

You need all the build tools:
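On Debian, something along these lines pulls in a working toolchain (package names are the usual Debian ones; adjust for your distribution):

    sudo apt-get install build-essential g++ autoconf automake libtool \
        cmake zlib1g-dev pkg-config libssl-dev maven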

Another prerequisite is Protocol Buffers: protobuf version 2.5, which can be downloaded from https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz. Download it to the /tmp directory; then:
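A sketch of the usual configure-and-make dance:

    cd /tmp
    tar xzf protobuf-2.5.0.tar.gz
    cd protobuf-2.5.0
    ./configure
    make
    sudo make install
    sudo ldconfig        # refresh the shared-library cache
    protoc --version     # should report libprotoc 2.5.0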

Having all the tools, we can now build Hadoop native libraries. Assuming you have downloaded the Hadoop 2.2.0 source code, do:
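The build command itself, run from the top of the source tree (this is the command documented in BUILDING.txt; the directory name depends on how you unpacked the source):

    cd hadoop-2.2.0-src
    mvn package -Pdist,native -DskipTests -Dtar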

Note: there is a missing dependency in one of the Maven project modules that results in a build failure at the hadoop-auth stage. There is an official bug report for it, and the fix is a one-line addition to the hadoop-auth POM.
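If memory serves, the missing piece is a jetty-util test dependency; a sketch of the patch to hadoop-auth/pom.xml:

    <dependency>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty-util</artifactId>
      <scope>test</scope>
    </dependency>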

 

Maven will do all the heavy work for you, and you should see Maven report BUILD SUCCESS after the build completes.

The built native libraries should be under hadoop-dist/target in the source tree, typically at hadoop-dist/target/hadoop-2.2.0/lib/native.

 

 

MapReduce group by example

MapReduce Group By Example: Grouped Statistics of Airline On-time Performance Dataset

GitHub repo:

https://github.com/drweiwang/BigData/tree/master/grpstats

About the Dataset

We use the airline on-time performance dataset as input data. We are only interested in the UniqueCarrier and ArrDelay columns of the dataset.

More information about the dataset can be found here: http://stat-computing.org/dataexpo/2009/

Find the maximum arrival delay grouped by airline
MapReduce strategy
Map Phase

The mapper simply parses the data line by line, extracts the UniqueCarrier and ArrDelay fields, and writes key-value pairs, where the key is the UniqueCarrier and the value is the numeric delay as an IntWritable.
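A minimal sketch of such a mapper (the class name and field positions are my assumptions: in the 2009 data expo layout, UniqueCarrier is the 9th column and ArrDelay the 15th, i.e. indices 8 and 14 zero-based; the actual code is in the GitHub repo above):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MaxDelayMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            try {
                String carrier = fields[8];               // UniqueCarrier
                int delay = Integer.parseInt(fields[14]); // ArrDelay in minutes
                context.write(new Text(carrier), new IntWritable(delay));
            } catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
                // skip the header line and records with missing values ("NA")
            }
        }
    }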

Reduce Phase

The reducer iterates through the list of all delays associated with a key (one airline) and updates the running maximum. Finally, the reducer writes the final maximum value out with the key (the airline code).
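And a matching reducer sketch (again, the class name is mine; see the repo for the real one):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class MaxDelayReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int max = Integer.MIN_VALUE;
            for (IntWritable v : values) {
                max = Math.max(max, v.get());
            }
            context.write(key, new IntWritable(max)); // airline code -> max ArrDelay
        }
    }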

Add menu item on Debian

Create Menu Item

Menu items are stored in two places:

  1. the /usr/share/applications directory, which is accessible to everyone;
  2. the ~/.local/share/applications directory, which is accessible to a single user.

The menu item is stored as a .desktop file. The file should be UTF-8 coded and resemble the following example which adds the Google chrome item to the application menu.
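For example, something like the following (the Exec and Icon values are assumptions; adjust them to wherever Chrome lives on your system):

    [Desktop Entry]
    Type=Application
    Encoding=UTF-8
    Name=Google Chrome
    Comment=Google Chrome web browser
    Exec=google-chrome %U
    Icon=google-chrome
    Terminal=false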

Line by line explanation

Line Description
[Desktop Entry] The first line of every desktop file and the section header to identify the block of key value pairs associated with the desktop. Necessary for the desktop to recognize the file correctly.
Type=Application Tells the desktop that this desktop file pertains to an application. Other valid values for this key are Link and Directory.
Encoding=UTF-8 Describes the encoding of the entries in this desktop file.
Name=Sample Application Name Names of your application for the main menu and any launchers.
Comment=A sample application Describes the application. Used as a tooltip.
Exec=application The command that starts this application from a shell. It can have arguments.
Icon=application.png The icon name associated with this application.
Terminal=false Describes whether application should run in a terminal.

If your application can take command line arguments, you can signify that by using the fields as shown below:

Add… Accepts…
%f a single filename.
%F multiple filenames.
%u a single URL.
%U multiple URLs.
%d a single directory. Used in conjunction with %f to locate a file.
%D multiple directories. Used in conjunction with %F to locate files.
%n a single filename without a path.
%N multiple filenames without paths.
%k a URI or local filename of the location of the desktop file.
%v the name of the Device entry.
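Once the file is written, drop it into one of the two directories listed above. A quick sketch (desktop-file-validate is an optional sanity check from the desktop-file-utils package):

    sudo cp google-chrome.desktop /usr/share/applications/
    desktop-file-validate /usr/share/applications/google-chrome.desktop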

Official Specification

Desktop Entry Specification:
http://standards.freedesktop.org/desktop-entry-spec/latest/index.html

How to Install Hadoop 2.2.0 on Debian

The Apache Hadoop website gives instructions on how to install Hadoop 2.2.0. However, as with most open source projects, it is not well documented, and some of the instructions simply do not work. I have spent quite a lot of time figuring out how to install Hadoop version 2.2 on my Linux Debian 7 wheezy machine. Here are the steps I took:

Step by step instructions to install Hadoop 2.2.0 on Debian wheezy Linux

Prerequisites

  1. Linux OS. I use Debian 7 wheezy
  2. SSH server. $ sudo apt-get install openssh-server
  3. Java JDK. If you don’t have it, follow these instructions: https://wiki.debian.org/JavaPackage. I installed the Sun Java JDK 1.7 downloaded from Oracle.
    1. Add a “contrib” component to /etc/apt/sources.list.
    2. Update the list of available packages and install the java-package package.
    3. Download the desired Java JDK/JRE binary distribution from Oracle. Choose a tar.gz or self-extracting archive; do not choose the RPM!
    4. Use java-package to create a Debian package from the archive.
    5. Install the package it creates. (All five steps are sketched in shell form right after this list.)
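    A sketch of the five steps (the sources.list line, JDK version, and generated package name are examples; substitute what you actually downloaded):

        # 1. make sure /etc/apt/sources.list includes "contrib", for example:
        #    deb http://ftp.debian.org/debian wheezy main contrib
        # 2. refresh the package lists and install java-package
        sudo apt-get update
        sudo apt-get install java-package
        # 3. download the Oracle JDK archive, e.g. jdk-7uXX-linux-x64.tar.gz
        # 4. build a Debian package from it (run as a normal user, not root)
        make-jpkg jdk-7uXX-linux-x64.tar.gz
        # 5. install the generated package (its name varies with the version)
        sudo dpkg -i oracle-java7-jdk_7uXX_amd64.deb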

    By default the Debian alternatives system will automatically install the best version of Java as the default version. If the symlinks have been set manually, they will be preserved by the tools. The update-alternatives tool tries hard to respect explicit configuration from the local admin, and manually created symlinks appear to be an explicit configuration. To reset the alternative symlinks to their default value, use the --auto option.

    If you’d like to override the default to perhaps use a specific version then use --config and manually select the desired version.

    Choose the appropriate number for the desired alternative.

    The appropriate java binary will automatically be in PATH by virtue of the /usr/bin/java alternative symlink.

    You may as well use the update-java-alternatives tool from the java-common package, which lets you update all alternatives belonging to one runtime or development kit at a time.
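    For example:

        # reset the java alternative to the automatic default
        sudo update-alternatives --auto java
        # or pick a specific version interactively
        sudo update-alternatives --config java
        # or switch everything for one JDK at once (list names with -l first)
        update-java-alternatives -l
        sudo update-java-alternatives -s <name-from-the-listing>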

Add Hadoop Group and User:
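A sketch, using the group and user names the rest of this post assumes:

    sudo addgroup hadoop
    sudo adduser --ingroup hadoop hadoopuser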

Setup SSH Certificate for password-less login
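The usual recipe, run as hadoopuser:

    su - hadoopuser
    ssh-keygen -t rsa -P ''                          # accept the default key location
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    ssh localhost                                    # should now log in without a password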

Download Hadoop 2.2.0
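A sketch, fetching the release tarball from the Apache archive and unpacking it to /usr/local (the install prefix the rest of this post assumes):

    wget http://archive.apache.org/dist/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
    sudo tar xzf hadoop-2.2.0.tar.gz -C /usr/local
    sudo mv /usr/local/hadoop-2.2.0 /usr/local/hadoop
    sudo chown -R hadoopuser:hadoop /usr/local/hadoop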

Or you can compile and build Hadoop 2.2.0 from source. See the build instructions for details:

http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt and my other post on how to build native Hadoop libraries: http://drweiwang.com/build-hadoop-native-libraries/

Setup Hadoop Environment Variables

Append the following code to the end of your shell config profile ~/.bashrc
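A sketch of the variables most 2.2.0 walkthroughs export (paths assume the /usr/local/hadoop install above; point JAVA_HOME at wherever your JDK actually lives):

    export JAVA_HOME=/usr/lib/jvm/java-7-oracle    # adjust to your JDK path
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export YARN_HOME=$HADOOP_HOME
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"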

Now you need to set the JAVA_HOME value in the hadoop-env.sh shell script; it is the one environment variable that is required.

Find the line that has export JAVA_HOME, which is the first export statement in the hadoop-env.sh file, and change its value to:
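Something like this (the file lives in $HADOOP_HOME/etc/hadoop/; the JDK path is the same assumption as above):

    export JAVA_HOME=/usr/lib/jvm/java-7-oracle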

Now Hadoop should be installed. Log back in as the user hadoopuser and check the Hadoop version:
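A quick check:

    su - hadoopuser
    hadoop version    # should report Hadoop 2.2.0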

 

Configure Hadoop

Change the core-site.xml configuration, which defines the HDFS file server

Paste the following between the <configuration></configuration> tags:
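A minimal single-node setting (port 9000 is the conventional choice in most walkthroughs):

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
    </property>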

Modify the yarn-site.xml by adding the following between the <configuration></configuration> tags:
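These are the two properties a minimal 2.2.0 YARN setup needs so that MapReduce jobs can shuffle:

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>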

Modify the mapred-site.xml.template. First rename it to mapred-site.xml:
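Assuming the /usr/local/hadoop install from earlier:

    cd /usr/local/hadoop/etc/hadoop
    mv mapred-site.xml.template mapred-site.xml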

and then, add the following code inside the <configuration> tag:
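This tells MapReduce to run on YARN:

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>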

Now, we need to configure the HDFS. Make two directories for the namenode and datanode:
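The directory locations are up to you; a common choice under the hadoopuser home directory:

    mkdir -p ~/mydata/hdfs/namenode
    mkdir -p ~/mydata/hdfs/datanode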

Then edit hdfs-site.xml and paste the following between the <configuration></configuration> tags:
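A single-node sketch, pointing HDFS at the two directories just created (replication is 1 because there is only one datanode):

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/home/hadoopuser/mydata/hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/home/hadoopuser/mydata/hdfs/datanode</value>
    </property>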

The namenode needs to be formatted first:
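Run as hadoopuser:

    hdfs namenode -format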

Start Hadoop Services

Everything should be all set, and we can now start the Hadoop services. The old shell commands start-all.sh and stop-all.sh of version 1.2 have been superseded by start-dfs.sh and start-yarn.sh in version 2.2.0.
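Both scripts are on the PATH thanks to the sbin entry added to ~/.bashrc earlier:

    start-dfs.sh
    start-yarn.sh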

If everything is successful, you should see the following java processes running:
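Check with the jps tool that ships with the JDK; on a healthy single-node setup you should see something like:

    jps
    # expect: NameNode, DataNode, SecondaryNameNode,
    #         ResourceManager, NodeManager (plus Jps itself)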

Run Hadoop Example
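As a smoke test, you can run one of the bundled examples, e.g. the Monte Carlo pi estimator (the jar path follows the 2.2.0 release layout):

    hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5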

K-means++ initialization algorithm

K-means clustering is a widely used clustering technique that seeks to minimize the within-cluster sum of distances. Solving the problem exactly is computationally difficult, but Lloyd [6] proposed a local search solution, often referred to as the "k-means" algorithm or Lloyd's algorithm, that is still widely used today. It is the speed and simplicity of the k-means method that make it popular, not its accuracy. The algorithm can converge to a local optimum that is very far from the global optimum (even under repeated random initializations). A typical example is given in Figure 1: the k-means algorithm converges to a local minimum after five iterations, which contradicts the obvious cluster structure of the data set.

Figure 1: A typical example of k-means converging to a local minimum. The result of the k-means clustering (the rightmost figure) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four-ray stars are the centroids (means). The initial configuration is in the leftmost figure. The algorithm converges after the five iterations presented in the figures, from left to right.

A proper initialization of k-means is crucial to obtaining a good final solution. Arthur and Vassilvitskii proposed the k-means++ initialization algorithm, which improves both the running time of Lloyd's iterations and the quality of the final solution. Our Statistics Toolbox already offers the kmeans function, which implements Lloyd's algorithm. We can improve the performance of the kmeans function by adding the k-means++ seeding to the initialization step.

There are many other initialization methods for k-means, for example the k-means refinement algorithm and some others. I chose k-means++ because it is simple and can be extended in the MapReduce framework. The k-means refinement algorithm can also be implemented in a MapReduce framework fairly easily, but its initialization is rather complicated and imposes a non-empty-cluster constraint. Although the refinement method is scalable and very suitable for big data, for small datasets its run time can be much longer than standard random sampling or the k-means++ algorithm, due to its complexity and constraints.
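For reference, the k-means++ seeding itself is only a few lines. A sketch in Python/NumPy (function and variable names are mine; the idea is exactly Arthur and Vassilvitskii's D-squared weighting):

    import numpy as np

    def kmeans_pp_init(X, k, seed=None):
        # k-means++ seeding: pick the first center uniformly at random, then
        # draw each next center with probability proportional to the squared
        # distance to the nearest center chosen so far.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        centers = [X[rng.integers(n)]]
        for _ in range(k - 1):
            C = np.array(centers)  # (c, d) centers chosen so far
            d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(axis=1)
            centers.append(X[rng.choice(n, p=d2 / d2.sum())])
        return np.array(centers)

    # tiny usage example: two well-separated blobs, k = 2
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 10])
    print(kmeans_pp_init(X, 2))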

 
