Wednesday, November 9, 2016

Hardening Firefox for Privacy and Security

Based on this guide, I decided to perform some adjustments in Firefox to protect my privacy and enhance security while browsing. After following the steps, my add-ons list now looks like the image below. I could use the Tor Browser, but I prefer to have more control over the browser settings.
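Some of these adjustments can also be made persistent in a user.js file instead of clicking through about:config. The preferences below are a small, illustrative subset of the kind of settings such guides change (verify each name in about:config before adopting):

```javascript
// Illustrative user.js snippet: a few privacy-related preferences.
user_pref("privacy.donottrackheader.enabled", true);   // send DNT header
user_pref("network.cookie.cookieBehavior", 1);         // block third-party cookies
user_pref("media.peerconnection.enabled", false);      // disable WebRTC (IP leak)
user_pref("geo.enabled", false);                       // disable geolocation
```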




Tuesday, November 8, 2016

Computer Science (CS), Information Technology (IT), and Information Systems (IS) in Philippine HEIs

(Source: CHED CMO 25 Series 2015)

Bachelor of Science in Computer Science

The BS Computer Science program includes the study of computing concepts and theories, algorithmic foundations, and new developments in computing. The program prepares students to design and create algorithmically complex software and develop new and effective algorithms for solving computing problems.

The program also includes the study of the standards and practices in Software Engineering. It prepares students to acquire skills and disciplines required for designing, writing and modifying software components, modules and applications that comprise software solutions.

(Topics on databases: Storage Structures, Relational Algebra, Query Optimization)
(Topics on networks: CSMA-CD, TCP Congestion Control)

Bachelor of Science in Information Technology

The BS Information Technology program includes the study of the utilization of both hardware and software technologies involving planning, installing, customizing, operating, managing and administering, and maintaining information technology infrastructure that provides computing solutions to address the needs of an organization.

The program prepares graduates to address various user needs involving the selection, development, application, integration and management of computing technologies within an organization.

(Topics on databases: Setup of a RDBMS such as Oracle, CRUD using SQL, User Access Management)
(Topics on networks: LAN Setup, Network Management)

Bachelor of Science in Information Systems

The BS Information Systems program includes the study of the application and effects of information technology on organizations. Graduates of the program should be able to implement an information system, taking into account the complex technological and organizational factors affecting it. These include components, tools, techniques, strategies, methodologies, etc.

Graduates are able to help an organization determine how information and technology-enabled business processes can be used as a strategic tool to achieve a competitive advantage. As a result, IS professionals require a sound understanding of organizational principles and practices so that they can serve as an effective bridge between the technical and management/user communities within an organization. This enables them to ensure that the organization has the information and the systems it needs to support its operations.

(Topics on databases: Student Academic Information Systems)
(Topics on networks: E-commerce)

Further Reading

  • http://www.innovators.edu.pk/node/233

Wednesday, October 26, 2016

NCITE 2016 Experience

This year's NCITE was held at DMC College, Dipolog City, Zamboanga del Norte. I presented a paper on OSv-MPI, part of my PhD project, under the IT track. The best papers for CS, IT, and IS came from ADMU, MSU-IIT, and ADNU, respectively. There were a lot of participants from the different regions. This was also the first time I attended a conference without the company of ICS colleagues. It was fun and enjoyable.

Wednesday, October 19, 2016

Packing light for a three-day academic conference

Being an academic involves a lot of traveling, especially to attend scientific or academic conferences. It is best to minimize the things to bring. Described here is what I usually pack for a three-day conference. My main objective is to avoid checking in luggage. I usually bring three bags: a carry-on, a backpack, and a small crossbody bag.

Carry-on bag

Shirts
  • 3 dress shirts (short, three-fourths, or long sleeves)
  • 3 undershirts
  • 2 polo shirts
  • 1 t-shirt
Pants/Shorts
  • 3 trousers
  • 1 walking shorts
  • 1 sleeping shorts
Shoes/Slippers
  • 1 pair of casual shoes
  • 1 pair of black leather shoes
  • 1 pair of casual slippers 
Underwear
  • 4 pieces underwear
  • 3 pairs of socks
  • 3 handkerchiefs
Backpack

Accessories
  • 1 small-medium towel
  • 1 light jacket/sweater
  • 1 watch
  • 1 belt
Others
  • Laptop with charger
  • Cellphone charger
  • Notebook and Pen
  • Instant coffee
  • Broadband stick
  • Paper Holder
Crossbody bag
  • Wallet
  • ID
  • Cellphone
  • Cash
  • ATM card
  • Travel documents
  • Point-and-click camera
 
What to wear

Going to the venue (day before the conference): polo shirt, trousers, and casual shoes

Conference day 1: dress shirt, undershirt, trousers, leather shoes

Conference day 2: dress shirt, undershirt, trousers, leather shoes

Conference day 3: polo shirt, trousers, casual shoes

Going home: same as Conference day 3

Touring around after each day: t-shirt, casual shorts, casual slippers

Sleeping: undershirt and sleeping shorts

Thursday, September 29, 2016

Setting up a Drone Programming Environment using DroneKit on Ubuntu 16.04

This guide describes how to set up an environment for programming drones using DroneKit. The main requirement is an Ubuntu 16.04 box. Using this environment allows you to test code before it is run on an actual drone. To run the code on a drone, simply update the connection parameters.

The procedure will require four terminals which will be used for different purposes. In the steps that follow, SRG-Bots is the working directory.

Terminal Zero - Used for installing the required packages 

$cd ~/Downloads
$wget http://firmware.eu.ardupilot.org/Tools/APMPlanner/apm_planner2_2.0.23_ubuntu_xenial64.deb
$sudo dpkg -i apm_planner*.deb
$sudo apt-get -f install
$sudo dpkg -i apm_planner*.deb
$sudo apt-get install python-pip
$pip install virtualenv
$mkdir SRG-Bots
$cd SRG-Bots
$virtualenv dronekit_env
$source dronekit_env/bin/activate
$pip install dronekit dronekit-sitl mavproxy

 
Terminal One - Used for running the simulator

$cd SRG-Bots
$source dronekit_env/bin/activate
$dronekit-sitl copter

In case the initialization is taking too long, you can reset the simulation.
$dronekit-sitl --reset


Terminal Two - Used to allow multiple connections to the drone

$cd SRG-Bots
$source dronekit_env/bin/activate 
$mavproxy.py --master tcp:127.0.0.1:5760 --sitl 127.0.0.1:5501 --out 127.0.0.1:14550 --out 127.0.0.1:14551

Terminal Three - Used to run APM Planner


$apmplanner2

Go to the menu, then select Communications -> Add Link -> UDP. Set the UDP Port field to 14550. Connection to the copter will be established.

Terminal Four - Used to run programs

$cd SRG-Bots
$source dronekit_env/bin/activate
$wget https://github.com/dronekit/dronekit-python/raw/master/examples/simple_goto/simple_goto.py
$python simple_goto.py --connect "udp:127.0.0.1:14551"

You can observe the behavior of the drone on the APM Planner as the code gets executed.
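As a side note, the --connect argument is just an address string: "udp:host:port" or "tcp:host:port" for the simulator, or a serial device path (e.g. /dev/ttyAMA0) on a real drone. The toy parser below is my own sketch, not DroneKit code, and only illustrates which pieces you change when moving off the simulator:

```python
# Toy illustration (not part of DroneKit): split a connection string
# into the parts you edit when switching from simulator to drone.
def parse_connection(conn):
    """Return (transport, endpoint, port) for a DroneKit-style address."""
    if conn.startswith("/dev/"):       # serial device on a real drone
        return ("serial", conn, None)
    transport, host, port = conn.split(":")
    return (transport, host, int(port))

print(parse_connection("udp:127.0.0.1:14551"))  # ('udp', '127.0.0.1', 14551)
```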


Monday, August 29, 2016

Drone 101

We have been lucky to be awarded some funds to buy drone kits. Since July, I have spent my weekends playing with the Erle-Copter assigned to me, which I named Red-SRG-Bot. This kit is powered by Linux and therefore opens up a lot of possibilities programmatically. I have a presentation that documents my activities in this area and another for an extension work. Below is a video of one of our outdoor flight tests.

Monday, August 1, 2016

Writing the abstract for systems papers

(Last update: 17 January 2023)

Writing an abstract for systems papers is easy if you know what to put in it. Begin the abstract with "This paper presents the design, implementation, and evaluation of ...", then add the following.

1. Catchy title for the system. (acronym, one word)
2. General description of the system and how it addresses a problem in a particular area. (two sentences)
3. Features/contributions that are unique or novel in the system. (three or more sentences)
4. Performance evaluation results. (two sentences)

Here's an example from this paper:
--------------
This paper presents the design and implementation of MemPipe, a dynamic shared memory management system for high performance network I/O among virtual machines (VMs) located on the same host. MemPipe delivers efficient inter-VM communication with three unique features. First, MemPipe employs an inter-VM shared memory pipe to enable high throughput data delivery for both TCP and UDP workloads among co-located VMs. Second, instead of static allocation of shared memories, MemPipe manages its shared memory pipes through a demand driven and proportional memory allocation mechanism, which can dynamically enlarge or shrink the shared memory pipes based on the demand of the workloads in each VM. Third but not the least, MemPipe employs a number of optimizations, such as time-window based streaming partitions and socket buffer redirection, to further optimize its performance. Extensive experiments show that MemPipe improves the throughput of conventional (native) inter VM communication by up to 45 times, reduces the latency by up to 62%, and achieves up to 91% shared memory utilization.
--------------

The goal of the abstract is to help the readers decide whether to continue reading the rest of the paper or not. Do your readers a favor by not wasting their time. Present what you have upfront in the abstract.

Sunday, July 31, 2016

Zotero+WebDAV

I use Zotero to keep track of papers I (need to) read, and recently I ran out of space on the Zotero servers because of attachments. Luckily, Zotero allows you to sync attachments using WebDAV. All I needed was to set up WebDAV on my group's server. Space problem solved!

Apache config directive:

    Alias /zotero /zotero
    <Location /zotero>
        Options Indexes
        DAV On
        AuthType Basic
        AuthName "zotero"
        AuthUserFile /etc/apache2/webdav.password
        Require valid-user
    </Location>

Graduate Study: Full-Time vs Part-Time

A lot of students ask me which is better, full-time or part-time graduate study. Full-time study means that the student takes courses and writes a thesis without other 'work' commitments. Part-time study, on the other hand, allows the student to have other 'work', such as being an instructor, in addition to taking courses and writing a thesis. I have done both: my MS was part-time and, currently, my PhD is full-time.

Part Time

Advantages
  • The student earns while studying because he/she is employed.
  • The student can improve on his/her teaching skills.
  • The student may be given a reduced teaching load.

Disadvantage
  • Difficult to focus on research because of teaching-related activities such as grading and meetings.
Full Time

Advantages
  • Can focus more on research and hopefully finish on time.
  • Can earn additional income if student is on study leave with pay.
  • Increased opportunity to travel because of more free time.

Disadvantages
  • Dependent on scholarship for financial support.
  • Limited opportunity to practice teaching.

Essentially, it all depends on the student. Even a part-time student with good time management skills can finish on time.

Monday, June 27, 2016

Preventing memory overcommit in Linux

Some HPC programs (such as plink) try to eat as much memory as they can using the brk() system call. To prevent your system from crashing because of this, set the following kernel parameters as shown:

#echo "2" > /proc/sys/vm/overcommit_memory
#echo "75" > /proc/sys/vm/overcommit_ratio

Tuesday, June 14, 2016

Five important tasks in operating an OpenStack private cloud

After the initial cloud setup, several tasks are important for successful cloud operation.

1. Account Processing

This task involves creating projects and users.

2. Backup Procedure

Backup of configuration settings (keystone, glance, nova), disk images, instances, and most importantly the database.
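A minimal sketch of such a backup, written in Python for portability; the paths below are typical Ubuntu defaults and are my assumption, not a prescribed layout:

```python
import os
import tarfile
import time

def backup_configs(paths, out_dir):
    """Archive the given paths into backup-YYYYmmdd.tar.gz under out_dir,
    skipping paths that do not exist on this node."""
    name = os.path.join(out_dir, "backup-%s.tar.gz" % time.strftime("%Y%m%d"))
    with tarfile.open(name, "w:gz") as tar:
        for path in paths:
            if os.path.exists(path):  # not every node runs every service
                tar.add(path)
    return name

# Typical controller targets (hypothetical paths; dump the DB separately
# with mysqldump first, then include the dump file here):
# backup_configs(["/etc/keystone", "/etc/glance", "/etc/nova",
#                 "/var/backups/openstack-db.sql"], "/var/backups")
```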

3. Restore Procedure

Using the backup data, a new/replacement node should be easy to configure.

4. Upgrade Procedure

Upgrades of hardware, the operating system, and the OpenStack version.

5. Regular Maintenance

Other stuff that must be done regularly such as documentation, testing, integrity check, etc.


Sunday, June 12, 2016

Recover corrupted InnoDB MySQL database for OpenStack

Summary

Due to power outages in the campus and the lack of backup power, the P2C cloud controller's MySQL database was corrupted and the mysql daemon would not start. Worst of all, I did not have any backup! Essentially, P2C operation was halted. My main concern was to recover the disk images (glance), and if possible the user accounts (keystone) and the instances (nova). The controller uses Ubuntu Server (14.04) and MariaDB (5.5.49). In this guide I discuss how I managed to PARTIALLY recover the cloud controller's database.

The corrupted file was /var/lib/mysql/ibdata1, a file used internally by MySQL. It is possible to recover the data if innodb_file_per_table is set, which LUCKILY was the case in my setup.

Steps

1. First, create a backup of the entire /var/lib/mysql of the controller (referred to as M1).

$sudo tar czvf p2c.controller.mysql.tar.gz /var/lib/mysql

2. Set up a different Ubuntu Server 14.04 machine (referred to as M2) with MySQL version 5.6.30 (using apt-get). According to [1], 5.6 has the features needed to recover data from .frm and .ibd files. More difficult methods exist [2][3].

3. Extract the backup created in step one on M2.

$sudo -s
#tar xzvf p2c.controller.mysql.tar.gz

4. Install mysql utilities on M2

#apt-get install mysql-utilities

5. Start with recovering keystone. I wrote the script below to automate the process. Run the script inside the keystone directory. (WARNING!! Make sure that there are no other MySQL databases on M2.) The variables at the start of the script must be set based on your settings. Running the script takes a while.

#cd var/lib/mysql/keystone
#chmod 755 innodb-recovery.sh 
#./innodb-recovery.sh

6. If all goes well, MySQL now contains the recovered data. You can now dump it.

#mysqldump --force -u root -p keystone > keystone.recovered.sql

7. Go inside the glance and nova directories and perform steps 5 and 6.

8. Copy back keystone.recovered.sql, glance.recovered.sql, nova.recovered.sql to M1.

9. Restore the dumps.

#rm /var/lib/mysql/ibdata1
#rm /var/lib/mysql/ib_logfile0
#rm /var/lib/mysql/ib_logfile1
#service mysql restart

#mysql -u root -p
mysql>drop database keystone;
mysql>create database keystone;
mysql>drop database glance;
mysql>create database glance;
mysql>drop database nova;
mysql>create database nova;
mysql>exit

#mysql -u keystone -p keystone < keystone.recovered.sql
#mysql -u glance -p glance < glance.recovered.sql
#mysql -u nova -p nova < nova.recovered.sql

#service mysql restart

10. Reboot the controller and hope for the best.

#reboot

Final Words

The process described here did not recover the data 100%. The following were observed:
  • The Glance/Nova image list is empty though the images are still in the filesystem. These images must be created again (glance image-create).
  • Fixed network must be created (nova network-create)
  • Floating IPs must be created (nova floating-ip-bulk create)
  • ALL information on instances was lost; the VMs, however, remained on their respective hosts.
  • Security groups must be created again.
  • Key Pairs must be generated again. 
  • The whole process took me a weekend.



Friday, June 10, 2016

Eaton 5L UPS on Ubuntu 14.04 using NUT

Lately I needed to configure an Eaton 5L UPS for the P2C controller node. Power outages are becoming frequent nowadays. The main requirement is that when a power outage occurs, the controller services (keystone, glance, nova, etc.) should be stopped and the node should shut down, followed by the UPS itself. I am running Ubuntu 14.04 on the controller, so NUT is easy to install (sudo apt-get install nut). It took me half a day to configure this, so to save you some time you can check out my config. Enjoy!

Tuesday, June 7, 2016

Building a search engine using Nutch and Solr


Requirements
  • Ubuntu Server 14.04
  • Apache Solr(distro package) 3.6.2+dfsg-2
  • Apache Nutch 1.11

Installation
Follow the steps (Installing Solr using apt-get) outlined in [1] to install Solr. Download and extract the binary package of Nutch. In [2], follow the sections "Verify your Nutch installation" and "Create a URL seed list". The configuration below is for indexing PDFs only.

After the installation, copy $NUTCH_HOME/conf/schema.xml to /etc/solr/conf/schema.xml, then restart Tomcat:

$sudo service tomcat6 restart

Download the nutch-site.xml below then replace the one in $NUTCH_HOME/conf with it.

nutch-site.xml


The script below recrawls the URLs. Make sure to change the SOLR_URL variable.

References
  1. https://www.digitalocean.com/community/tutorials/how-to-install-solr-on-ubuntu-14-04
  2. https://wiki.apache.org/nutch/NutchTutorial


Sunday, May 22, 2016

Allow public access to web applications deployed within a private network

Scenario:
You have a web application currently deployed within a private network. You would like your friends to test the application over the Internet.

Solution:
One option is to set up a public server with a configuration similar to the one on the private network. However, this approach is tedious and costly, especially if it is for testing purposes only.
An alternative is to use the proxying capabilities of Apache, in particular reverse proxying.

Requirements:
  • Ubuntu Server 14.04 LTS with a public IP address and a private IP address (which is connected to the private network where the web application is running)
Steps:
  1. Install and enable the required apache modules.
    1. sudo apt-get install apache2 libapache2-mod-proxy-html
    2. sudo a2enmod proxy_html
  2. Edit the /etc/apache2/sites-enabled/000-default. Add the following inside the VirtualHost directive, reflecting your own settings.
    1. ProxyPass "/myapp" "http://private_ip_hosting_the_app"
    2. ProxyPassReverse "/myapp" "http://private_ip_hosting_the_app"
  3. Restart Apache.
    1. sudo service apache2 restart
  4. Visit http://public_ip_of_server/myapp to test
Final Notes:
You can add as many web applications as you want. Here is an example.
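For instance, a second application on another internal host can be exposed under its own path by adding more directive pairs inside the same VirtualHost (the IPs and paths here are placeholders):

```apache
# Each internal application gets its own public path (placeholder addresses)
ProxyPass        "/myapp"    "http://192.168.1.10/"
ProxyPassReverse "/myapp"    "http://192.168.1.10/"
ProxyPass        "/otherapp" "http://192.168.1.11:8080/"
ProxyPassReverse "/otherapp" "http://192.168.1.11:8080/"
```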

Sunday, April 17, 2016

Beeswarm Honeypot

Honeypots enable network security personnel to detect malicious activities in a network by tricking attackers into believing that certain valid network services (such as web and FTP) are running on a server. In reality, however, honeypots simply log/analyze the connection attempts initiated by an attacker.

Beeswarm is one of many available open source honeypot packages. It has a web frontend to allow for easier configuration. I made a minimal setup for our department just to check whether some individuals/malware are doing something interesting on our network. I will add updates to this post later.

Setup Notes:
On Ubuntu 14.04 server, use the following line to install the pyDes dependency. The one in the guide fails.
$ pip install http://twhiteman.netfirms.com/pyDES/pyDes-2.0.1.zip

Tuesday, March 8, 2016

Using the Python OpenStack API to access P2C

OpenStack has a Python API that can be used to develop services around it. In this post I describe how to use it for P2C.

1. After starting an Ubuntu 14.04 instance in P2C, log in to the instance using its floating IP. First, create/edit /etc/apt/apt.conf.d/43proxy:

Acquire::http::Proxy "http://10.0.3.201:3142";

Run the following commands to install dependencies.
  • sudo apt-get install build-essential autoconf libtool pkg-config python-dev
  • wget https://bootstrap.pypa.io/get-pip.py
  • python get-pip.py
  • sudo pip install python-keystoneclient
  • sudo pip install python-glanceclient
  • sudo pip install python-novaclient

2. Edit /etc/hosts to add an entry for the frontend node:

10.0.3.101  cinterlabs-frontend

3. Create an rc file (guest-openrc.sh) containing the following information (don't forget to replace the values with your own credentials):

  export OS_TENANT_ID=028d55fc448046c6832db6527da13bf9
  export OS_TENANT_NAME=guest
  export OS_USERNAME=guest
  export OS_PASSWORD=guest_password

  export OS_AUTH_URL=http://cinterlabs-frontend:35357/v2.0

4. Create credentials.py containing the following:

#!/usr/bin/env python
import os

def get_keystone_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['password'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['tenant_name'] = os.environ['OS_TENANT_NAME']
    return d

def get_nova_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['api_key'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['project_id'] = os.environ['OS_TENANT_NAME']
    return d


5. Create list-instances.py containing the following:

#!/usr/bin/env python

from novaclient import client as novaclient
from credentials import get_nova_creds
creds = get_nova_creds()
nova = novaclient.Client("2", **creds)
print nova.servers.list()


6. Test the code by executing the commands below. You should see a list of instances.
  • source guest-openrc.sh
  • chmod 755 list-instances.py
  • ./list-instances.py
7. Below is sample code to start an instance using the Python API. Save it as start-instance.py.

#!/usr/bin/env python
import os
import time
from novaclient import client as novaclient
from credentials import get_nova_creds

creds = get_nova_creds()
nova = novaclient.Client("2",**creds)
image = nova.images.find(name="Ubuntu-14.04-server-amd64")
flavor = nova.flavors.find(name="p2c.1_512_5_1_1")
instance = nova.servers.create(name="frompython", image=image, flavor=flavor, key_name="jachermocilla-p2c")

# Poll until the status is no longer 'BUILD'
status = instance.status
while status == 'BUILD':
    time.sleep(5)
    # Retrieve the instance again so the status field updates
    instance = nova.servers.get(instance.id)
    status = instance.status
print "status: %s" % status


To test:
  • source guest-openrc.sh
  • chmod 755 start-instance.py
  • ./start-instance.py

References:
  • http://www.ibm.com/developerworks/cloud/library/cl-openstack-pythonapis/
  • http://docs.openstack.org/developer/python-novaclient/

Thursday, January 28, 2016

APAN 41 Manila - Fellowship Summary Report

First of all, I would like to thank APAN, especially the Fellowship Committee headed by Dr. Basuki Suhardiman, for awarding me a fellowship. I doubt that I would have been able to attend such a meeting without one.

(Some Photos)

The main reason I wanted to attend an APAN meeting is, of course, that I am very interested in the topics discussed in the technical sessions, as well as the opportunity to network. The workshops on cloud computing, network engineering, and network research testbeds, along with the other co-located events, are very much related to my research area, and I learned a lot from attending them. I have a few blog entries that summarize some of the talks I attended.

The main realization I came away with from this meeting is to never underestimate the value of COLLABORATION. The advances in research and education networks cannot be achieved without collaboration. I am amazed that the majority of the presentations end with a slide showing the logos of collaborating institutions and partners!

Unlike academic meetings that I frequent, the APAN meeting has a relatively informal and light atmosphere. Everyone seems to be at ease with each other and the senior members are very kind, accommodating, and generous. 

Attending this meeting also made me realize that my country, the Philippines, is behind its neighbors in terms of network infrastructure and capacity. I think improving this is just not a priority of the government at the moment. It is great that DOST-ASTI participates in these kinds of activities that put the Philippines on the map. I also hope to be able to contribute, perhaps by submitting a research paper or joining a working group to organize workshops during the technical sessions. Interestingly, APAN publishes proceedings.

Since the APAN 41 Meeting is an international event, I also learned to appreciate the culture of people from other countries, particularly the other fellows. Talking to them gave me new perspectives in looking at things, not just on technical matters but also on other aspects of life. 

I believe that the APAN Fellowship was able to achieve its objectives and I hope that APAN will continue to support this program. I highly encourage others, especially the young ones, to participate and contribute to future APAN meetings.

MARAMING SALAMAT PO.

Wednesday, January 27, 2016

APAN 41 Manila - Day 4

27 January 2016

Today I attended the Network Engineering Workshop. The abstracts and slides of the talks are here. This workshop was by far the most organized, starting and finishing on time.

The majority of the talks described the speakers' home institutions' current network infrastructure as well as their future plans. They are slowly moving from 10Gbps to 100Gbps connections (the term is Long Fat Networks, or LFNs).

There were also some presentations on protocol modifications (TCP in particular) in order to support 10/100 Gbps transfer over long distances across the Pacific Ocean.

Tuesday, January 26, 2016

APAN 41 Manila - Day 3

26 January 2016

Today I attended most of the sessions from the Cloud Working Group.


Talk: More than Three Years of OpenStack Clouds at NCI
Speaker: Andrew Howard, NCI High Performance, andrew.howard@anu.edu (via Skype)

In this talk, Andrew covered the history of their OpenStack deployments at NCI. It is surprising to learn that they have deployments using different versions. At this point in time, they are experimenting with Ceph for storage. One person from the audience asked how they keep up with the rapid release of newer versions of OpenStack in order to stay updated. They use Fibre Channel for connectivity.


Talk: Application-Centric Overlay Cloud Utilizing Inter-Cloud
Speaker: Shigetoshi Yokohama

This was a short talk about the use of the cloud in big data analysis. The middleware group to which the speaker belongs focuses on the automatic and quick creation of virtual clouds. Other groups are working on aspects such as optimal resource selection and infrastructure.


Talk: SmartX Playground Update
Speaker: JongWon Kim, Gwangju Institute of Science and Technology

This talk was more of an update on the SmartX Playground, which integrates recent technologies such as SDN and IoT with clouds.


Talk: National Computing Center Singapore
Speaker: 

This talk described some updates on the NCC in Singapore. It is located on the 7th floor. They use the term InfiniCloud because they use an InfiniBand interconnect. Their facility is state of the art.


Talk: KREONET Cloud Update
Speaker: Yoonjoo Kwon, KISTI

This talk was about some updates on KREONET, including COREEN and RealLab.


Talk: VM Migration on SDNs
Speaker: Kashir Nifan

This talk is about a VM migration mechanism implemented in Java. 


Talk: Collaboration with APAN WG
Speaker: Eric Yen, Academia Sinica

In this talk, Eric emphasized that collaboration is needed to encourage members of other working groups to utilize the infrastructure developed by the Cloud WG. He said that the requirements should drive the cloud facility.


In the afternoon, I attended sessions on Future Internet Testbeds. Testbeds are real/virtual networks where researchers can experiment with new ideas.

The day ended with a fellowship dinner with some presentations from local talents.

Monday, January 25, 2016

APAN 41 Manila - Day 2

25 January 2016

Talk: IPv6 Working Group Meeting
Speaker: Nava C. Arjuman

In this talk, Nava gave an overview of an IoT case study in Japan. According to him, three things are usually considered when adopting a new technology: performance, security, and cost. He also said that market/industry needs drive innovation. In the case of Japan, for example, energy conservation is a driving factor for the development and deployment of smart meters. Using the ECHONET-Lite standard, devices can be monitored.


Talk: FELIX Tutorial: Federation of SDN testbeds for large-scale network experiments
Speaker: Jason H. Haga et al.

The talk described the architecture of FELIX. Basically, it is about resource sharing across different domains, in terms of both geographical and political boundaries. The idea is very similar to grid computing. FELIX uses 'slices' as the basic unit of shared resource, which may include compute or network resources. FELIX follows a hierarchical design, and the majority of its components are implemented in Python. Several case studies were also presented.

Talk: Introduction to cloud computing and OpenStack
Speaker: Karlo Paulino

The talk gave an overview of the advantages of cloud computing, in particular IaaS. Karlo described the different components of OpenStack as well as RedHat's virtualization engine. A short demo was given to highlight the features.

Talk: Role of the IX Manager/Coordinator
Speaker: Jake Chin

This talk summarized some tips on how to be a good IX coordinator. One tip that stuck with me: do your homework first before any attempt to peer.

Talk: Blacklisting DNS using a software defined network switch
Speaker: Mon Nunez

This talk outlined a solution for blacklisting malicious DNS access using OpenFlow and Raspberry Pis.

Fellowship Meeting:

The general manager of APAN, Marcus, gave an introduction to APAN. All the APAN 41 fellows finally got to see each other face to face.

The day ended with a fellowship dinner.

Sunday, January 24, 2016

APAN 41 Manila - Day 1

24 January 2016

Talk: Identity and Access Management
Speaker: Terry Smith

Terry is from the Australian Access Federation (AAF), and he discussed some aspects of how they operate this project. He talked about federated identity management, which is essentially an arrangement among multiple enterprises to use identification data. This arrangement requires parties to follow a trust model. The advantages include single sign-on, reduction in work, updated data, and improved security and usability. The main entities are the Identity Provider (IdP), the Service Provider (SP), and the Users. When a user wants to use a service provided by a service provider, the service provider contacts the identity provider to get the user credentials needed to use the resource. In this scenario, active protection of user information must be guaranteed.

The federation is responsible for the following: maintaining a list of IdPs and SPs, defining rules, providing user support, operating a central discovery service, and developing tools.

A common issue is how much information is to be shared among the entities. This can be resolved using a consent engine or government policies (as in the case of Singapore). 

Terry also talked about the types of federations which include mesh, hub-and-spoke, centralized, and mashups.

Operating a federation requires tools. Terry discussed some of these, such as the AAF Registry Tool, Jagger, Janus, OpenConext, and others. A hands-on activity was also conducted using the AAF Registry Tool.

A brief overview of eduroam, a location-independent wireless network service, was also given. This is an example of a service that uses federated identity and access management.

Operating a national federation is very much like operating a business requiring full-time staff and resources. Marketing the services is also important.

DOST-ASTI is starting to roll out a federated IAM for the Philippines.

Saturday, January 16, 2016

ICS awarded by CHED as Center of Excellence in IT Education for 2016-2018

The Commission on Higher Education (CHED) recently recognized my institute, the Institute of Computer Science, along with other HEIs in the country, as a Center of Excellence in Information Technology Education for 2016-2018. This award is given to an institution offering CS/IT/IS programs, both graduate and undergraduate, that satisfies certain criteria set by CHED. In addition to the recognition, an institution qualifies to receive funding for proposed projects.

The evaluation conducted by CHED is mostly based on materials submitted, which are mostly written documents. On-campus interviews and inspections were also conducted for additional information and clarification. During the on-campus interview, according Prof. Connie Khan,  the CHED committee was impressed by the extensive involvement of ICS in national-level initiated projects as well as the active participation of faculty, students, and alumni in developing and enhancing the programs offered in the institute.