Ritmo: VR Rhythm Game for Oculus Rift

Here is the skeleton of a new VR game for Oculus Rift — and possibly Vive in the near future. I just wanted something fun and quick to develop, as a test bed for my new VR interface library.

The player must hit objects that fly toward them in sync with the music. It makes for good physical exercise, really. And that is basically it.

It of course needs more work — rich and colorful environments, more effects, rewards, statistics, leaderboards, etc. — but the bare-bones game already works, as shown in the video above.

More about this later on, thanks.

Solving Ubuntu Stuck on the Login Screen

After a simple apt-get update && apt-get upgrade, the next time I booted Ubuntu 16.04 LTS it was stuck in a login loop, not letting me into the system normally. Searching on Google, I found that many people have had the same problem. I then tried almost everything that was suggested (except for some extremely risky ideas that would not have worked anyway), but nothing fixed the problem for me.

In desperation I tried one thing that ended up working well, and I want to share the solution with you. First, at boot time (on the initial GRUB boot selection screen), I chose the previous kernel to boot with. It booted normally, and I could log in to the system again. Then I went to http://kernel.ubuntu.com/~kernel-ppa/mainline, found the latest kernel available in the list (v4.15.10 at the time), and downloaded these files:


Note that those were two linux-headers packages (all and amd64, as I'm using 64-bit Linux) plus the linux-image package, all for the same 4.15.10 kernel version. You may prefer the low-latency kernel, but those builds are really meant for specific use cases of Linux, so I went with the standard generic one.

After getting the three files, I installed them all at once by running the following inside their download directory:

sudo dpkg -i *.deb

Then I rebooted. I don't know exactly what caused the login problem, or whether what I did will solve it for everyone, but it fixed the login loop for me, and I also ended up with the latest stable kernel while keeping my system intact. I don't recommend blindly following the wilder suggestions you find out there (uninstalling parts of the system, or installing more and more random packages). Upgrading the kernel was straightforward, did not add or remove any packages, and was safe: if it had not worked, I could simply select an older kernel at boot and try something else.

Good luck.

Quickly Install Samba on a Raspberry Pi (or any Linux, really)

So, I wanted to quickly share a USB HDD on my network using a Raspberry Pi. This is something simple, but it cost me a bit of extra time to get working this time, so I'm posting it here in case I forget again, or someone comes looking for the same quick solution. On the Raspberry Pi:

If you don’t have the nano editor installed, install it first (skip this if you already have it):

sudo apt-get install nano

Now install Samba and, immediately after, open smb.conf (the Samba configuration file) for editing:

sudo apt-get install samba samba-common-bin
sudo nano /etc/samba/smb.conf

Put the following lines at the end of smb.conf (the only things you really *need* to customize are the path in the share section, and maybe the workgroup if your network’s workgroup is not the default Windows WORKGROUP):

[global]
  workgroup = WORKGROUP
  wins support = yes
  netbios name = Raspberry
  server string =
  domain master = no
  local master = yes
  preferred master = yes
  os level = 35
  security = user

[Public]
  comment = Public
  path = /mnt/media01/Public
  public = yes
  writable = yes
  create mask = 0777
  directory mask = 0777

Remember that the path above must be changed to the actual mount path of your USB HDD.

Save (CTRL+O then ENTER to save, CTRL+X to leave nano) and then restart Samba:

sudo /etc/init.d/samba restart

Now Windows Explorer should see the shared folder on the network. Please note that this setup is not meant for high security: anyone on the network can read and write the share. I don’t have strangers accessing my Wi-Fi network, so I’m not too paranoid about that. If you need stronger security than this quickie, please look elsewhere.

Real-Time Brain Wave Analyzer

The EEG (electroencephalogram) is a neurological test that can reveal abnormalities in a person’s brain waves. EEG devices are traditionally found only in medical facilities, and most people will take an EEG test at least once in their lives. These devices have a few dozen electrodes that read brain activity and record it for later analysis.

Traditional EEG device

Some years ago, a few portable, consumer-oriented EEG devices appeared, one of them being the Emotiv Epoc, a 14-channel wireless EEG headset which, although not comparable to a clinical EEG, allows for some interesting brain-wave experiments and visualizations. Interestingly, unlike traditional EEG devices, which only record brain waves for medical analysis of brain health, the portable device also provides some basic facilities for coarse “mind reading”: through clever real-time analysis of the user’s brain activity, it can most of the time, with some effort, detect a few limited “thoughts” like push, pull and move. So the user can (again, in a very limited way) effectively control the computer with their mind. It even provides an SDK for advanced users and programmers to develop their own applications.

That is all cool, but it was not really what I was looking for. I wanted lower-level device access — direct to the metal, raw sensor readings for research purposes. I posted a new YouTube video showing the first prototype of a real-time brain-wave analyzer that I have just started developing for personal AI research.

At the time of recording, I was wearing an Emotiv Epoc, and the waves were real, raw EEG data being read from my own brain. I used a hacked low-level driver (on Linux) to get complete access to the device’s raw sensors, instead of using its built-in software, which exposes only limited sensor readings. The hacked driver was not written by me, though: when searching for low-level Epoc protocol info, I found Emokit-c, which already provided the full access I wanted. From there, I connected the device’s data stream to the 3D engine and built the first prototype over the weekend.

For now the prototype is still rough, just showing raw waves with no further processing. In the near future, I plan to connect a neural network to this raw EEG analyzer, learning patterns from thoughts and emotions, and doing more useful (possibly serious, medical-related) things.
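Before a neural network enters the picture, even a simple frequency analysis can extract structure from raw EEG samples. The sketch below is not from my prototype — it is a minimal, self-contained illustration using a synthetic 10 Hz “alpha-like” sine wave (the 128 Hz sample rate is an assumption) and a naive DFT, just to show the kind of processing a raw sample stream enables:

```python
import math

SAMPLE_RATE = 128  # Hz; assumed sample rate for this illustration

def dominant_frequency(samples, rate):
    """Naive DFT: return the frequency (Hz) of the bin with the most energy."""
    n = len(samples)
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = k * rate / n, power
    return best_freq

# Synthetic stand-in for one second of one raw EEG channel: a pure 10 Hz wave.
samples = [math.sin(2 * math.pi * 10 * i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]
print(dominant_frequency(samples, SAMPLE_RATE))  # → 10.0 (a strong "alpha" peak)
```

A real analyzer would use an FFT and sliding windows, but the principle — raw samples in, band energies out — is the same.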

Although the prototype may look like just a graphics demo, as said above it is not merely fancy rendering: it is actually talking to a real device and getting real raw EEG data from its sensors, which will later be processed by a neural network with serious intent.

More about this in the future, when time permits.

Cross-Platform Neural Network Library

I have been spending some time sharpening my Artificial Intelligence skills again. I have been looking at https://www.tensorflow.org and, although it is nice, powerful and useful, I still very much like to write my own code and completely understand and master the object of study, so that is what I did recently — a personal neural network framework written entirely from scratch. I struggled a little with the different back-propagation gradient formulas, but after mastering those details I am satisfied with the current results. The acquired knowledge also helps me better understand larger frameworks like TensorFlow.
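A standard way to gain confidence in back-propagation formulas is to compare the analytic gradient against a numerical finite-difference estimate. This is not code from my library — just a minimal sketch for a single sigmoid neuron with squared-error loss:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One sigmoid neuron, one weight, squared-error loss: L = 0.5 * (sigmoid(w*x) - t)^2
def loss(w, x, t):
    return 0.5 * (sigmoid(w * x) - t) ** 2

def analytic_grad(w, x, t):
    y = sigmoid(w * x)
    # Chain rule: dL/dy * dy/dz * dz/dw = (y - t) * y * (1 - y) * x
    return (y - t) * y * (1.0 - y) * x

def numeric_grad(w, x, t, eps=1e-6):
    # Central difference approximation of dL/dw
    return (loss(w + eps, x, t) - loss(w - eps, x, t)) / (2 * eps)

w, x, t = 0.7, 1.5, 1.0
print(abs(analytic_grad(w, x, t) - numeric_grad(w, x, t)))  # tiny, so the formula checks out
```

The same trick scales to full networks: perturb one weight at a time and compare against the back-propagated gradient.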

This tiny, unnamed neural network library of mine is cross-platform, compatible with basically all hardware platforms and operating systems, and still small, with no external dependencies at all. It is fully self-contained. I like that, because deployment is very easy and I can integrate it into any app — desktop, mobile or embedded — in a matter of minutes.

The following short video shows basic learning and recognition of digits. I ran it inside Unity3D because of how easy it makes visual prototyping, but as said, the NN library itself has no dependencies, so it is not tied to Unity or any other engine or library.

I will keep adding features to this personal lib — it’s not just for digit recognition! — and I intend to have it running on an intelligent robot that is going to entertain the family for a long time. =)

More on this later, thanks for reading.



Mobile Game and Neural Networks

At the moment I am working on Etherea² only in my spare time, on weekends. On normal work days I am programming a new (and completely unrelated) commercial mobile game, and I’m honestly happy to have paid work, so things keep going nicely. The game — a real-time strategy title — is relatively simple and nice, but I won’t give any further details, as it is not my property and I normally don’t comment on commercial work.

But back to weekend programming: I’ve been creating my own deep neural network library. It is not tied to or dependent on any particular engine, so it can be used both in database-related systems (which I worked on in the past and have been contemplating again) and in VR/games.

Neural networks are awesome, as they try to emulate how real neurons work. The artificial ones also have dendrites (which I reduce to input/output ports) and synapses (which I reduce to connections between different neurons), and from that simple structure some really interesting things can be created. As you probably know, they can be used in many different domains: vision, speech and general cognition are some of them.

For now, my implementation is entirely CPU-based. When it becomes rock solid and fully featured, I will consider moving some or even all of it to a GPU implementation using compute shaders. Right now I’m still satisfied with CPU performance, so there is no urgency to port it to the GPU.

The following screenshot shows a small network visualized in Unity3D — so I can quickly confirm that the topologies and synapses are being created correctly.

The network above is a feedforward one — an extremely simple (but correct and useful) neural net in this case. However, the underlying structure can automatically build and interconnect a neural net of any size (limited only by memory), using either perceptrons or sigmoid neurons.
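To make the idea concrete, here is a hedged sketch (not my library’s actual code) of a fully connected feedforward structure built from layer sizes alone, where the activation function decides whether each unit behaves as a perceptron or a sigmoid neuron:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(z):
    return 1.0 if z >= 0.0 else 0.0

def make_net(layer_sizes, seed=42):
    """weights[layer][neuron] holds one weight per input plus a trailing bias."""
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(n_in + 1)]  # +1 for the bias
             for _ in range(n_out)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(net, inputs, activation=sigmoid):
    a = list(inputs)
    for layer in net:
        # Append 1.0 so the bias weight participates in the dot product.
        a = [activation(sum(w * x for w, x in zip(ws, a + [1.0]))) for ws in layer]
    return a

net = make_net([3, 5, 2])           # 3 inputs, one hidden layer of 5, 2 outputs
out = forward(net, [0.1, 0.9, 0.5]) # sigmoid outputs, each in (0, 1)
hard = forward(net, [0.1, 0.9, 0.5], activation=perceptron)  # 0/1 outputs
```

Swapping the activation is the only difference between the two neuron types; the topology code is shared.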

As for usage, I have a list of plans for it. One is life and behavior simulation in Etherea²; another is a robot that is now waiting for a second iteration of development.

More on this later. Thanks for reading.

Reality Shock

That’s right. After almost a week on Indiegogo, Etherea² has only about 130 visits and one brave backer, so in practice it is still unknown to the world, lost in the void. It did not generate any traction. A campaign launched too early, without any marketing at all — that is the problem.

I still believe it is commercially viable, though. What it needed was a small pre-investment so it could be properly developed until reaching a Minimum Viable Product (MVP), when it would be able to generate traction — before starting a crowdfunding campaign, that is.

Anyway, I will continue working on the project in my spare time — as always — regardless of funding. I just love this project, and I want it completed even if it ends up being a game for me to play with my kids — while also teaching them programming. =)

Thanks for reading.

Vegetation in Etherea²

Dev Log opened

Hi there. So, after a few years, I have decided to reopen a public log. Here I’ll be posting about my personal progress in general — mostly programming and robotics, and sometimes real-life subjects. Welcome, and feel free to leave me a comment. Thanks.

Work on Etherea²

Etherea² for Unity3D can potentially handle a universe trillions of trillions of trillions of kilometers across, with 0.1 mm accuracy everywhere, while Unity3D (and almost every current 3D engine) can only handle scenes of about 64,000 meters out of the box. That size limit can’t fit a single Etherea² planet, much less its virtually infinite number of planets. So Etherea² effectively extends Unity3D’s limits to handle more than the size of the observable universe.

But it does not stop there; the size of the universe is just one of the many challenges Etherea² solves. It can also render a 1 cm ant very close to the camera while rendering a 13,000 km wide planet immediately behind it, plus another planet a million kilometers behind both, without z-buffer fighting. You can also leave something on one planet, fly to a different star system light-years away, explore another planet, then come back and find the same object in the same exact place, without any interruptions or loading screens. And you don’t need a supercomputer for that; in fact, you could be doing it on your Android phone — yes, at reduced resolution and quality compared to the desktop version, but it does run on mobile.
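The 64,000-meter limit comes from engines storing positions as 32-bit floats, whose precision degrades with distance from the origin. A common remedy (one plausible approach; I'm not claiming it is exactly what Etherea² does) is to keep world positions in high precision and render everything relative to a floating origin near the camera. This sketch simulates 32-bit rounding with the standard struct module to show why:

```python
import struct

def f32(x):
    """Round a Python float to the nearest 32-bit float, as a GPU would store it."""
    return struct.unpack('f', struct.pack('f', x))[0]

STEP = 1e-4  # 0.1 mm, in meters

# Absolute float32 coordinates: one billion meters from the origin,
# adjacent representable values are ~64 m apart, so a 0.1 mm step vanishes.
far = f32(1.0e9)
print(f32(far + STEP) == far)    # → True: the movement is lost entirely

# Camera-relative coordinates: positions stored as offsets from a nearby
# floating origin, where the same 0.1 mm step is represented just fine.
near = f32(0.0)
print(f32(near + STEP) == near)  # → False: the movement survives
```

Keeping the authoritative positions in 64-bit (or larger) coordinates and re-basing the render origin as the camera moves is what lets sub-millimeter accuracy coexist with astronomical distances.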

As a reminder, here is a reference video of the original Etherea¹. This old 64 KB tech demo was available for download in 2010–2011 and was written entirely in C++ and OpenGL:

Out of curiosity: even with texture and geometry compression, a single planet made by hand would take something like 4 GB of disk space. But Etherea¹ was 100% procedural, and the entire tech demo — eight full planets plus background music — used only 64 KB of disk space.
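That 64 KB figure works because nothing is stored: the content is regenerated deterministically from a seed every time. As a toy illustration (a hash standing in for the layered Perlin/simplex noise a real engine would use — this is not Etherea's actual generator), the idea looks like this:

```python
import hashlib, struct

def terrain_height(seed, x, y):
    """Deterministic pseudo-height in [0, 1) for integer surface coordinates.
    A real engine would layer smooth noise octaves; a hash shows the principle."""
    digest = hashlib.sha256(struct.pack('<qqq', seed, x, y)).digest()
    return int.from_bytes(digest[:8], 'little') / 2**64

# The same seed always regenerates the same terrain — no data on disk.
a = [terrain_height(1234, x, 0) for x in range(5)]
b = [terrain_height(1234, x, 0) for x in range(5)]
print(a == b)  # → True: identical every run

# A different seed is, in effect, a different planet.
print(a == [terrain_height(9999, x, 0) for x in range(5)])  # → False
```

Ship the generator plus eight seeds instead of eight planets' worth of meshes and textures, and the whole demo fits in kilobytes.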

Anyway, I ended up stopping all work on Etherea¹ (all versions, including the Unity3D one) around 2013. The day job consumed me entirely, and I could not handle the side work on Etherea. It was then frozen for a few years.

I have been working on Etherea² on and off for a few months now. Initially I was writing it in pure JavaScript/WebGL (demo here: https://imersiva.com/demos/eo ), when I decided to jump to Unity3D again, because it is currently the most used 3D engine in the world, which eases adoption. As I plan to have it adopted by other programmers and teams, that was a good choice, I guess.

The whole “Etherea” thing is not just a plain game. I want it to become an open virtual reality platform that people can easily populate with their own content, and even create new universes — and games that run directly in those universes — by building, scripting and exploring their own and others’ creations. I would like to eventually visit a distant space station and find a cinema built by someone, with movies to watch. Or find a race track on a planet’s surface, where people meet to race just for fun. All that seamlessly streamed, without “loading…” screens or simulation interruptions. Ambitious, I know, but it is to be implemented gradually, in iterative layers, not all at once.

I am still working on the building tools, which are partly inspired by how Cube 2 ( http://sauerbraten.org ) handles building. The tools will also allow regular polygonal models to be imported (initially only the .obj format, maybe other formats later). There will also be internal C# and even shader creation support, so more advanced users will be able to create new procedural object types — a huge fractal structure floating in space, for example — and spread those new object types throughout the universe. I am sure there are many creative people who can populate the huge space with some really unexpected things.

Here is a short, preliminary Etherea² video:

I want it to become open source, but I also need money to help push it to completion. For now, while looking for an investor and/or partner, I am trying to crowdfund it through Indiegogo:


As of this post, the campaign has had almost no visibility. The page is just sitting there, with only a single brave backer (whom I thank so much) and practically no daily visits. Let’s hope the situation changes and it ends up successfully funded — and finally opened in its entirety to all the backers.


Thanks for reading.