Intelicam v1.0

As I mentioned previously, I’ve been sharpening my Artificial Intelligence skills. In the meantime, I created Intelicam 1.0, a hybrid security system built entirely around artificial intelligence. It is like a DVR, but with a few twists: Intelicam can detect and recognize people on camera, warn about new recordings, and monitor/stream multiple cameras in real time.

The system is compatible with cheap USB cameras, more professional IP cameras and remote RTSP stream sources. Videos can also be easily separated into different categories.

Currently the software runs under Ubuntu Linux 18.04; in the near future it will probably run under Windows 10 as well.

I am now replacing my old home DVR with Intelicam, as it already performs so much better. No more time spent reviewing piles of videos of trees moving in the wind. =)


Etherea VR progress

From the tutorial to landing on a planet and back to the space station. Lots of things are still disabled; in the final game this galaxy will be full of continuous challenges.

The game was recorded directly in VR, and the video shows one eye only. As I’m in a real rush these days, I forgot to disable some of the eye visual effects for this recording, so they ended up showing (that rounded dark vignette, for example).

Please note that *all* planets, stars and other things are really there (no fake backgrounds) and fully explorable.


TinyMP3 Player

Edit: TinyMP3 source-code is now available on github: https://github.com/imerso/tinymp3

Today I published a recent Node + React + Material-UI-Next experiment on Heroku — it’s called TinyMP3: https://tinymp3.herokuapp.com. As the name implies, the little web app streams MP3s from a remote server. A track can be played individually by clicking on its name, or added to a (session-only) playlist. Although I did not add buttons, clicking the top-left skips to the previous track, the top-right to the next, and the center pauses/resumes the playing playlist track. On desktop clients it also shows a simple FFT spectrum bar in real time.

It is also embedded in this post. The MP3 list on the open demo is some free MP3 music available on the internet, so there are no legal infringements, I guess. The app is meant to be installed on a server somewhere, to stream music to anywhere else — in my case I am now using it to serve the music library at home. I also wrote another module that can stream photos and videos as well, but only published the MP3 part for now.

It is really small and loads very fast, but it’s currently using a free Heroku account, which means that it’ll be sleeping most of the time — so it might take several seconds for it to wake up. Please be patient.


I still need to find time to write that post about React and Webpack, but it hasn’t happened yet.


Quick NFS on Raspberry

If for some reason you run into problems with broken Samba sharing on Raspbian, or just want something faster and more transparent, there is the NFS alternative — which, at least for me, is nicer than Samba. Accessing it from Windows is a bit more involved, and I won’t cover that in this quickie, as I only need to access it remotely from another Linux machine anyway.

So, it was a bit tricky to get it running and mounting automatically, but after spending some time on Google and experimenting with a lot of different random forum suggestions, I finally got it working nicely, automatic mounting included. Here I’ll try to summarize it in just a few steps (first, on a Raspberry terminal):


sudo apt-get install nfs-kernel-server

Then open /etc/exports for editing:


sudo nano /etc/exports

And add this line to export the USB HDD mounted there (you need to change the path to your mounted USB drive, of course):


/mnt/media01/Public 192.168.1.0/255.255.255.0(ro,sync,no_subtree_check,no_root_squash)

The first path is where I have an external USB HDD mounted on this Raspberry, and the mask after it allows all machines from 192.168.1.* (my internal network) to read from it normally over the network. I decided to export it as read-only (the ro flag). You should adapt these to your case.

To overcome the warning “NFS Server: Not starting: portmapper is not running” that happens on Raspbian, I needed to use this:


sudo update-rc.d rpcbind enable && sudo update-rc.d nfs-common enable
sudo service rpcbind restart
sudo /etc/init.d/nfs-kernel-server restart

instead of just restarting NFS. Check that the NFS server is running properly with:


sudo /etc/init.d/nfs-kernel-server status

OK, so if it’s now running properly on Raspbian, go to the desktop Linux machine from which you want to access that Raspberry share, and install the common client:


sudo apt-get install nfs-common

Edit fstab to add the share on that machine:


sudo nano /etc/fstab

Then add the map to the share:


# maps to raspberry's music nfs
192.168.1.2:/mnt/media01/Public /media/Public nfs auto,user,ro,hard,intr 0 0

192.168.1.2 is my Raspberry’s IP; you should change that. Also, do not forget to create the mount point (/media/Public) locally.

This specific combination of client and server configuration should make it mount the shared NFS drive automatically at boot time. If you don’t want to reboot right now, you can mount manually with


sudo mount -a

In my case, I never turn the Raspberry off, so it’s always available. If your Raspberry is not going to be online 24/7, you may prefer to remove the “auto” flag from the fstab line, then run mount -a whenever you want to access it in a given session.

 

Of the many places I visited, these were the most important for finding a successful configuration:

1 – https://askubuntu.com/a/7124

2 – https://raspberrypi.stackexchange.com/questions/10403/nfs-server-not-starting-portmapper-is-not-running

3 – https://forum.manjaro.org/t/nfs-doent-mount-on-startup-with-kernel-4-13/34762/17


A Quick Take on NodeJS

INTRODUCTION

As a relatively old programmer, I’ve learned lots of different technologies and languages over the years. In addition to games/VR, I also developed many commercial database systems in the past. I was a Senior Analyst at quite a few different companies, doing analysis, design and implementation (from the dBase era to Clipper to Access to C++ MFC to PHP to .NET and Java). Lately I’ve been developing a new database application with Node.js (https://nodejs.org) — and decided to post a bit about it here.

Roughly speaking, Node.js is a console application which runs JavaScript. It does basically the same thing that ASP.NET and Java have been doing for more than a decade already, *but* it does it in a different way, using JavaScript on both sides of the application. That ends up being practical, because when developing a web application with Java, .NET or PHP we have to use JavaScript most of the time anyway.

Although I’m writing this in 2018 and it has been around for a few years already, in my opinion only recently has it become a solid option against the big brothers. JavaScript improved a lot, there are now good IDEs around, and the whole thing has taken a reasonably clear shape. It is maturing. The Node.js environment is still lightweight, easy to install and portable (you can install the Node.js environment and develop servers even on an Android phone, for example). So, if done correctly, it becomes an interesting platform to develop with.

Node.js is basically the Google V8 engine (https://developers.google.com/v8). V8 is what Chrome uses to run JavaScript code on the web. One of the nice things about V8 is that it is cross-platform, so it runs on many different operating systems.

What Node.js did was — roughly speaking — compile V8 as a standalone program and let it execute JavaScript from the command line. You create a JavaScript file, then run it with Node.js. Very roughly speaking, that is basically it. To make things more interesting, it provides some native system access — built-in and through plugins — so its JavaScript becomes more powerful than the sandboxed JavaScript of browsers. You can read/write local files, open network sockets, and access a number of native features — while still being cross-platform.

Now, since that made it so easy to write things and modules for it, these days there are thousands upon thousands of plugins, libraries (in the form of packages) and code snippets out there, which can make things really confusing for those who are just arriving at the platform — but if you start from the beginning, the platform itself is very simple.

A FIRST EXAMPLE

Before we dive into code examples, please note that I’m not actually teaching how to code. You should either already know JavaScript, or have enough programming experience in other languages and client-server architectures to infer the logic from the code posted ahead. This is not a JavaScript tutorial; this is a Node.js architecture tutorial. If you want to learn JavaScript, there are many free tutorials — I suggest this initial Google search: https://www.google.com/search?q=learn+javascript.

Let’s suppose, for example, that you wanted JavaScript to sum 2 + 2 and show the result. Traditionally, that would require you to create a simple text file with HTML code and open it in a web browser. Something like this:

<html>
	<head>
		<script>
			function sum(a, b)
			{
				return a + b;
			}
		</script>
	</head>
	<body>

		2 + 2 = <script>document.write(sum(2, 2))</script>

	</body>
</html>

You put the above inside an “index.html” and open it in your browser. And you see “2 + 2 = 4” on the page.

Now, with Node.js, you don’t need an actual browser. You just create a file with any name, anywhere — let’s say “sum.js” in your current directory — and run it directly. Something like this:

console.log("2 + 2 = " + sum(2, 2));

function sum(a, b)
{
	return a + b;
}

Then on a console window, run with

node sum.js

That is essentially the very same thing, except that you did not need a web browser — well, again roughly, Node.js *is* the browser running the JavaScript for you, on the console. And that is the essence of Node.js, really. As simple as it looks.

A BUILT-IN FEATURE EXAMPLE

But then things get more interesting, because Node.js provides some built-in and transparent native access for server-side JavaScript (the client-side JavaScript which runs in the browser is still sandboxed, for obvious security reasons) — please check the Node.js API to see all of it. For example, it provides a built-in http module which lets you create a simple HTTP server in JavaScript, in just a few lines:

var http = require('http');
var port = 3000;

// create an http server that answers with "Hello World" when accessed
const server = http.createServer(function (req, res)
{
	res.writeHead(200, {'Content-Type': 'text/plain'});
	res.write('Hello World!');
	res.end();
});

// start listening for connections
server.listen(port, () =>
{
	console.log("Server running on port " + port);
});

Please note that the above example uses only a built-in feature of Node.js. You don’t need anything else. Just create a file anywhere — let’s say, “http.js” — then type:

node http.js

And you’re done. That is a working “Hello World” web-server, and you can test it by opening http://localhost:3000 on your browser.

Here you will notice that you already have both server and client. The server is run by “node http.js”; the client is run in a browser, by opening the server’s URL. The above is like a micro Apache server, let’s say. It runs as a console application, but when you connect to it through a browser, it returns content to the browser. Here the server returned just the string “Hello World!”, but it could return anything to the browser (which is, in fact, the client). You could easily extend the above bare-bones HTTP server example by reading files from the disk and writing them back down the pipeline, effectively creating a feature-complete web server. With Node.js alone.

ADDING PACKAGES WITH NPM

There are quite a few useful external packages out there, which can save precious development hours.

For example, you could replace the above bare-bones HTTP server with something more powerful, and avoid writing a lot of HTTP handling code yourself. Node.js comes with npm, a package manager which has a large database of packages (libraries), including, for example, Express.js, probably the most used HTTP server these days. Express.js is *not* a requirement, though. I’m not saying that you should, but you *could* very well write something similar yourself, starting from the built-in http module. But people have done that already, so you will probably want to use it instead.

Because we’re now going to use external packages downloaded from the internet to help our productivity, you will notice that we have to actually create a directory for our project and initialize npm there. OK, so let’s install Express.js and rewrite our HTTP web server to use it:

mkdir web-server
cd web-server
npm init
(press enter to all questions, this is just a simple test anyway)
npm install express

You will notice that npm created a “node_modules” directory, with many sub-directories and files in there. Those are Express.js and its dependencies, downloaded and installed by npm.

OK so, in the same directory, create a “server.js” with this:

// bare-bones express web-server

const express = require('express');
const app = express();

var port = 3000;
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(port, () => console.log('Listening on port ' + port));

And run with

node server.js

The server starts listening for http connections on port 3000. Just point your browser to http://localhost:3000 and you’ll see the “Hello World” returned by the web-server. I won’t dive into the additional features of Express.js here, please refer to its homepage (https://expressjs.com) to find more. I just wanted to progressively show why and how additional packages are added to a Node.js project.

Soon you realize that the possibilities are really huge. You can add a MySQL package and have a simple console app that accesses a database, or, going further, access a database and then send the query results down the web pipeline through the web server. Add a UI package and have beautiful client-side rendering of those database queries served over the web. And so on.

Let’s create a simple console app that lists people’s names by querying MySQL. This first MySQL example won’t have a web server; it’ll just list the query results on the console. We will use the mysql package for that.

mkdir mysql-console
cd mysql-console
npm init
(press enter until npm finishes initialization)
npm install mysql

Now we will assume that we have a database “node_tut” with a table “people”, and this table has only “id” and “name” fields, which we want to list on the console. If you need to learn MySQL, please head to https://www.google.com/search?q=learn+mysql – as with everything else, you’ll find many free resources to study.

Create a file called “mysql-console.js” on the project’s root directory:

// table is: id and name

const mysql = require('mysql');

var con = mysql.createConnection
(
	{
		host: "localhost",
		user: "some_user",
		password: "some_password",
		database: "node_tut"
	}
);

con.connect(
	function(err)
	{
		if (err) throw err;
		console.log("Connected!");

		console.log("Querying...");
		var query = con.query("select id,name from people", (err, rows) =>
		{
			if (err) throw err;

			rows.forEach( (row) =>
			{
				console.log("row: " + row.id + " - " + row.name);
			});

			console.log("Done.");
		});

		con.end();
	}
);

Run it with

node mysql-console.js

And you should immediately see a list of the people registered in that fictional database, directly on the console.

Now we want to go further and access that people list from the web. First, let’s add Express.js to the application’s package list:

npm install express

Express.js should now be available to our code, so let’s add support for it — edit the file mysql-console.js and replace the code with this:

// table is: id and name

const express = require('express')
const mysql = require('mysql')
const app = express()

var port = 3000;


// queries the db and sends the result down the web
function list(res)
{
	var list = "";

	var con = mysql.createConnection
	(
		{
			host: "localhost",
			user: "some_user",
			password: "some_password",
			database: "node_tut"
		}
	);

	con.connect(
		function(err)
		{
			if (err) throw err;
			console.log("Connected!");

			console.log("Querying...");
			list = "<h3>PEOPLE:</h3><br>";

			var query = con.query("select id,name from people", (err, rows) =>
			{
				if (err) throw err;

				rows.forEach( (row) =>
				{
					list += row.id + " - " + row.name + "<br>";
				});

				res.send(list);
				console.log("Done.");
			});

			con.end();
		}
	);
}

// express web server
app.get('/', (req, res) => res.send("<a href='/list'>List People</a>"));
app.get('/list', (req, res) => list(res));
app.listen(port, () => console.log('Example app listening on port ' + port));

Run it again with

node mysql-console.js

Note that although the name is the same, this is now a web-server application. It no longer lists people in the console, but on the web. The web server returns a link when the homepage “/” is accessed; the link points to another route (as it’s commonly called these days), “/list”, and that “/list” route calls the list() function to query the database and return the list of people as HTML. Access it at http://localhost:3000 and see the people from the database in a browser.

One note here: in the above example we’re returning the list as HTML directly. That is what would be called “server rendering”, that is, we return the rendered HTML code directly. I did it that way to keep things clear and simple, but normally I would return the list as a data array structure (JSON, for example) and build the HTML client-side from that array.
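To sketch that alternative: the /list route would simply call res.json(rows), and a client-side helper (renderRows below is an invented name, just for illustration) would build the same markup from the JSON array:

```javascript
// client-side rendering sketch: build the html from a JSON rows array
// (renderRows is a hypothetical helper, not part of the tutorial code)
function renderRows(rows) {
	return '<h3>PEOPLE:</h3><br>' +
		rows.map(row => row.id + ' - ' + row.name).join('<br>');
}

// example data, in the shape the /list query would return as JSON
const rows = [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }];
console.log(renderRows(rows)); // prints "<h3>PEOPLE:</h3><br>1 - Alice<br>2 - Bob"
```

The advantage is that the same JSON endpoint can then feed any client: a web page, a mobile app, another service.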

Of course, that is all bare-bones, but it hopefully shows the concept in full, with minimal code and installed packages, while keeping a clear distinction between the parts.

CONCLUSION

I think that this post is too big already, so I’ll stop here for now. I would like to make another post in the future, regarding React.js and Webpack, if time permits. There are many free resources about them though, so you surely don’t need to wait for me.

To get a solid initial grip on Node.js, and expand on it, I suggest that you start by visiting its API documentation: https://nodejs.org/dist/latest-v9.x/docs/api — please note that this URL will certainly change over time, as newer versions arrive. Maybe you will prefer to simply start from the main site, https://nodejs.org, and find the API docs from there.

After you get a solid understanding of the basics and built-in features, you’ll be much less confused by the myriad of available packages. Start small and grow solid; it’s not as hard as it seems if you have the patience to learn progressively while you practice each level.

I hope that this tutorial was useful in some way. Thanks for reading, and good luck.


Ritmo: VR Rhythm Game for Oculus Rift

Here is the skeleton of a new VR game for Oculus Rift — and possibly Vive in the near future. I just wanted something fun and quick to develop, as a test bed for my new VR interface lib.

The player must hit objects that come in their direction, in sync with the playing music. That makes for good physical exercise, really. And that is basically it.

It of course needs more work — rich and colorful environments, more effects, rewards, statistics, leaderboards etc. — but the bare-bones game already works, as shown in the above video.

More about this later on, thanks.


Solving Ubuntu stuck on Login Screen

After a simple apt-get update && apt-get upgrade, the next time I booted Ubuntu 16.04 LTS it was stuck in a login loop, not letting me enter the system normally. Searching on Google, I found that many people have had the same problem. I then tried almost everything that was suggested (except some extremely risky things which would not have worked anyway), but nothing fixed the problem for me.

In desperation I tried one thing that ended up working well, and I want to share the solution with you. First, at boot time (on the initial GRUB boot selection screen), I chose the previous kernel to boot with. It booted normally, and I could log in to the system again. Then I went to http://kernel.ubuntu.com/~kernel-ppa/mainline, found the latest kernel available in the list (v4.15.10 at the time), and downloaded these files:

linux-headers-4.15.10-041510_4.15.10-041510.201803152130_all.deb
linux-headers-4.15.10-041510-generic_4.15.10-041510.201803152130_amd64.deb
linux-image-4.15.10-041510-generic_4.15.10-041510.201803152130_amd64.deb

Note that those are two linux-headers packages (all and amd64, as I’m using 64-bit Linux), plus the linux-image, all for the same 4.15.10 kernel version. You might prefer the low-latency kernel, but those are really meant for some specific Linux use cases, so I went with the standard generic one.

After getting the three files, I installed them all at once by typing, inside their download directory:

sudo dpkg -i *.deb

Then I rebooted. I don’t really know what exactly caused the login problem, or whether what I did will solve it for everyone, but for me it fixed the login problem and I also ended up with the latest stable kernel, while keeping my system intact. I don’t recommend blindly following most of the crazy suggestions you find out there (uninstalling parts of the system or installing more and more random packages). Upgrading the kernel was straightforward, did not add or remove any packages, and was safe: if it had not worked, I could simply have selected an older kernel at boot and tried something else.

Good luck.


Quickly Install Samba on Raspberry (or any Linux, that is)

So, I wanted to quickly share a USB HDD on my network using a Raspberry Pi. This is something simple, but I wasted a bit of extra time getting it working this time, so I’m posting it here in case I forget again in the future, or someone comes looking for the same quick solution. On the Raspberry:

If you don’t have nano editor installed, install it first (or skip this if you have it already):

sudo apt-get install nano

Now install Samba and, immediately after, open smb.conf (the Samba configuration file) for editing:

sudo apt-get install samba samba-common-bin
sudo nano /etc/samba/smb.conf

Put the following lines at the end of smb.conf (the only things you really *need* to customize are the path in the second part, and maybe the workgroup, if your network workgroup is not the default Windows WORKGROUP):

[global]
  workgroup = WORKGROUP
  wins support = yes
  netbios name = Raspberry
  server string =
  domain master = no
  local master = yes
  preferred master = yes
  os level = 35
  security = user

[public]
  comment = Public
  path = /mnt/media01/Public
  public = yes
  writable = yes
  create mask = 0777
  directory mask = 0777

Remember that the path above must be changed to the actual path of the mounted USB HDD.

Save (CTRL+O then ENTER to save, CTRL+X to leave nano) and then restart Samba:

sudo /etc/init.d/samba reload

Now Windows Explorer should see the shared folder on the network. Please note that this is not set up for high security. I don’t have strangers accessing my Wi-Fi network, so I’m not too paranoid about that. If you need stronger security than the quickie above, please look elsewhere.


Real-Time Brain Wave Analyzer

The EEG (electroencephalogram) is a neurological test which can reveal abnormalities in a person’s brain waves. The EEG device is traditionally found only in medical facilities. Most people will take an EEG test at least once in their lives. EEG devices have a few dozen electrical sensors which can read brain activity and record it for later analysis.

Traditional EEG device

Some years ago, a few portable, consumer-oriented EEG devices appeared, one of them being the Emotiv Epoc, a 14-channel wireless EEG device which, although not comparable to a clinical EEG, allows for some interesting brain wave experiments and visualizations. Interestingly, unlike traditional EEG devices, which only record brain waves for medical analysis of brain health, the portable device also provides some basic facilities for coarse “mind reading”: through some clever real-time analysis of the user’s brain activity, it can, most of the time and with some effort, detect a few limited “thoughts” like push, pull and move. So the user can (again, in a very limited way) effectively control the computer with their mind. It even provides an SDK for advanced users and programmers to develop their own applications.

That is all cool, but it was not really what I was looking for. I wanted lower-level device access, direct to the metal: raw sensor readings for research purposes. I posted a new YouTube video showing the first prototype of a real-time brain wave analyzer that I have just started developing for personal AI research.

At the time of recording, I was wearing an Emotiv Epoc, and the waves were real, raw EEG data being read from my own brain. I used a hacked low-level driver (on Linux) to get complete access to the device’s raw sensors, instead of using its built-in software, which provides only limited access to the sensor readings. The hacked driver was not written by me, though — when searching for low-level Epoc protocol info, I found Emokit-c, which already opened the full access I wanted. From there, I connected the device’s data stream to the 3D engine and built the first prototype over the weekend.

For now the prototype is still rough, just showing raw waves with no further processing. In the near future, I plan to connect a neural network to this raw EEG analyzer, so it can learn patterns from thoughts and emotions and do more useful (possibly serious, medically related) things.

Although the prototype may look like just a graphics demo, as said above it’s not just fancy rendering: it is actually talking to a real device and getting real raw EEG data from its sensors, which will later be processed by a neural network with serious intent.

More about this in the future, when time permits.


Cross-Platform Neural Network Library

I have been spending some time sharpening my Artificial Intelligence skills again. I still very much like to write my own code and completely understand and master the object of study, so that is what I did recently — a personal neural network framework written entirely from scratch. I struggled a little with the different back-propagation gradient formulas, but after mastering those details I am satisfied with the current results. The acquired knowledge helps me better understand the bigger frameworks and modern progress.
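To give a taste of the kind of formula involved, here is a minimal sketch (not the library’s code; all numbers and names are arbitrary) of the gradient-descent update for a single sigmoid neuron with squared-error loss:

```javascript
// one sigmoid neuron learning a single example by gradient descent
// (illustrative sketch only, not the library's actual code)
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

let w = [0.1, -0.2]; // weights
let b = 0;           // bias
const lr = 0.5;      // learning rate
const x = [1, 0];    // one training input
const target = 1;    // desired output

for (let step = 0; step < 200; step++) {
	const out = sigmoid(w[0] * x[0] + w[1] * x[1] + b);
	// dE/dz = (out - target) * sigmoid'(z), where sigmoid'(z) = out * (1 - out)
	const delta = (out - target) * out * (1 - out);
	w = w.map((wi, j) => wi - lr * delta * x[j]);
	b -= lr * delta;
}

const out = sigmoid(w[0] * x[0] + w[1] * x[1] + b);
console.log(out.toFixed(3)); // approaches the target of 1
```

A full network just chains this same delta computation backwards, layer by layer.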

This tiny, unnamed neural network library of mine is cross-platform, compatible with basically all hardware platforms and operating systems, and still small, with no external dependencies at all. It is fully self-contained. I like that, because deployment is easy: it can be integrated into any app, on desktop, mobile and embedded platforms, in a matter of minutes.

The following simple video shows basic learning and recognition of digits. I ran it inside Unity3D because of its ease of visual prototyping, but as said, the NN library itself has no dependencies, so it’s not tied to Unity or any other engine or library.

I will keep adding features to this personal lib — it’s not just for digit recognition! — and I intend to have it running on an intelligent robot which is going to entertain the family for a long time.

More on this later, thanks for reading.