Month: October 2017

Cross-Platform Neural Network Library

I have been spending some time sharpening my Artificial Intelligence skills again. I have been around the field and, although the big frameworks are nice, powerful and useful, I still very much like to write my own code and completely understand and master the object of study. So that is what I did recently: a personal neural network framework written entirely from scratch. I struggled a little with the different back-propagation gradient formulas, but after mastering those details I am satisfied with the current results. The acquired knowledge also helps me better understand bigger frameworks like TensorFlow.
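For reference, the gradient formulas in question boil down to a couple of lines. Here is a minimal sketch (illustrative only, not the library's actual code) of the output-layer delta for a sigmoid neuron trained under squared error:

```cpp
#include <cmath>

// Sigmoid activation, and its derivative expressed in terms of the output a,
// since sigma'(z) = a * (1 - a) when a = sigma(z).
double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }
double sigmoid_prime_from_output(double a) { return a * (1.0 - a); }

// Output-layer delta for squared error E = 1/2 * (a - y)^2:
// dE/dz = (a - y) * sigma'(z) = (a - y) * a * (1 - a)
double output_delta(double a, double y) {
    return (a - y) * sigmoid_prime_from_output(a);
}

// Gradient of E with respect to a weight feeding this neuron:
// dE/dw_i = delta * x_i, where x_i is the input carried by that synapse.
double weight_gradient(double delta, double input) {
    return delta * input;
}
```

Hidden-layer deltas follow the same pattern, except the error term is the weighted sum of the deltas of the next layer instead of `(a - y)`.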

This tiny, unnamed neural network library of mine is cross-platform, compatible with basically all hardware platforms and operating systems, yet small and fully self-contained, with no external dependencies at all. I like that because deployment is very easy: I can integrate it into any app, on desktop, mobile and embedded platforms, in a matter of minutes.

The following simple video shows basic learning and recognition of digits. I ran it inside Unity3D because it makes visual prototyping easy, but as mentioned, the NN library itself has no dependencies, so it is not tied to Unity or any other engine or library.

I will keep adding features to this personal library (it is not just for digit recognition!) and I intend to have it running on an intelligent robot which will entertain the family for a long time. =)

More on this later, thanks for reading.



Mobile Game and Neural Networks

At this moment I am only working on Etherea² in my spare time, on weekends. On normal workdays I am programming a new (and completely unrelated) commercial mobile game, and I am honestly happy to have paid work, so things keep going nicely. The game is relatively simple and nice (a real-time strategy game), but I won't give any further details, as it is not my property and I normally don't comment on commercial work.

But back to weekend programming: I have been creating my own deep neural network library. It is not tied to any particular engine, so it can be used both in database-related systems (which I worked on in the past and have been contemplating again) and in VR/games.

Neural networks are awesome, as they try to emulate how real neurons work. The artificial ones also have dendrites (which I reduce to input/output ports) and synapses (which I reduce to connections between different neurons), and from that simple structure some really interesting things can be created. As you probably already know, they can be used in many different domains: vision, speech and general cognition are some of them.
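That "ports and connections" structure can be sketched in a few lines. This is a hypothetical illustration of the idea, not the library's actual types:

```cpp
#include <vector>
#include <cstddef>

// A minimal illustration of the structure described above: neurons hold an
// activation value, and synapses connect a source neuron to a target neuron
// with a weight (all names here are my own, for illustration).
struct Neuron {
    double bias = 0.0;
    double activation = 0.0;
};

struct Synapse {
    std::size_t from;   // index of the source neuron
    std::size_t to;     // index of the target neuron
    double weight;
};

// Weighted sum flowing into one neuron through its incoming synapses.
double net_input(const std::vector<Neuron>& neurons,
                 const std::vector<Synapse>& synapses,
                 std::size_t target) {
    double sum = neurons[target].bias;
    for (const Synapse& s : synapses)
        if (s.to == target)
            sum += neurons[s.from].activation * s.weight;
    return sum;
}
```

Feeding the network forward is then just computing `net_input` for each neuron, layer by layer, and passing the result through an activation function.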

For now, my implementation is entirely CPU based. When it becomes rock solid and fully featured, I will consider moving some or maybe all parts to a GPU implementation using compute shaders. Right now I am still satisfied with CPU performance, so there is no urgency to port it to the GPU yet.

The following screenshot is taken from a small network visualized in Unity3D, so I can quickly confirm that the topologies and synapses are being created correctly.

The network above is a feedforward one: an extremely simple (but correct and useful) neural net in this case. However, the underlying structure can automatically build and interconnect a neural net of any size (limited only by memory), using either perceptrons or sigmoid neurons.
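The two neuron types mentioned above differ only in their activation function over the weighted input z. A quick sketch (illustrative, not the library's API):

```cpp
#include <cmath>

// Perceptron: a hard threshold, outputs 0 or 1.
double perceptron(double z) { return z > 0.0 ? 1.0 : 0.0; }

// Sigmoid: a smooth version of the same step. Its output is differentiable,
// which is what makes the network trainable by gradient descent.
double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }
```

The smoothness is the whole point: a tiny change in a weight produces a tiny change in a sigmoid's output, while a perceptron can flip abruptly from 0 to 1, which makes learning by small adjustments much harder.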

As for usage, I have a list of plans for it. One of them is life and behavior simulation in Etherea²; another is a robot which is now waiting for a second iteration of development.

More on this later. Thanks for reading.

Reality Shock

That is right. After almost a week on Indiegogo, Etherea² has only about 130 visits and one brave backer, so in practice it is still unknown to the world, lost in the void. It did not generate any traction. The problem: a premature campaign, without any marketing at all.

I still believe it is commercially viable, though. What it needed was a small pre-investment so that it could be properly developed until reaching a Minimum Viable Product (MVP), the point at which it could generate traction, before starting a crowdfunding campaign.

Anyway, I will continue working on the project in my spare time, as always, regardless of funding. I just love this project, and I want it completed even if it only becomes a game for me to play with my kids, while also teaching them programming. =)

Thanks for reading.

Vegetation in Etherea²

Dev Log opened

Hi there. So, after a few years, I just decided to reopen a public log. Here I will be posting about my personal progress in general, mostly about programming and robotics, and sometimes about real-life subjects. Welcome, and feel free to leave me a comment. Thanks.

Work on Etherea²

Etherea² for Unity3D can potentially handle a universe spanning trillions of trillions of trillions of kilometers, with 0.1 mm accuracy everywhere, while Unity3D (and almost all current 3D engines) can only handle scenes of about 64,000 meters out of the box. That size limit cannot fit a single Etherea² planet, much less its virtually infinite number of planets. So Etherea² effectively extends Unity3D's limits to handle more than the size of the observable universe.
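The post doesn't describe how Etherea² does this internally, but the usual way to escape an engine's 32-bit float precision limit is camera-relative coordinates: store world positions in higher precision and only hand the engine small offsets from the camera. A hypothetical sketch of that general technique:

```cpp
// Hypothetical illustration of camera-relative coordinates (the post does
// not say this is how Etherea² actually works). World positions are kept in
// 64-bit doubles; the 32-bit floats the engine consumes are only ever small
// offsets from the camera, so precision stays high no matter how far from
// the origin the camera travels.
struct WorldPos { double x, y, z; };   // high-precision world coordinates
struct LocalPos { float x, y, z; };    // what the engine actually renders

LocalPos to_camera_relative(const WorldPos& p, const WorldPos& camera) {
    // The subtraction happens in double precision; only the small
    // camera-relative result is truncated to float.
    return { static_cast<float>(p.x - camera.x),
             static_cast<float>(p.y - camera.y),
             static_cast<float>(p.z - camera.z) };
}
```

A position a trillion kilometers from the origin would be hopelessly coarse as a raw float, but as a small offset from a nearby camera it keeps sub-millimeter precision.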

But it does not stop there; the size of the universe is just one of the many challenges that Etherea² solves. It can also render a 1 cm ant very close to the camera while rendering a 13,000 km wide planet immediately behind it, plus another planet a million kilometers behind both, without any z-buffer fighting. You can also leave an object on one planet, fly to a different star system light-years away, explore another planet, then come back and find the same object in the exact same place, without any interruptions or loading screens. And you do not need a supercomputer for that; in fact, you could be doing it on your Android phone (yes, at reduced resolution and quality compared to the desktop version, but it does run on mobile).
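Again, the post doesn't say which depth scheme Etherea² uses, but one well-known technique for rendering centimeters and planets in the same frame is a logarithmic depth buffer, which spreads depth precision evenly across orders of magnitude instead of concentrating it near the camera. A sketch of the mapping:

```cpp
#include <cmath>

// Hypothetical sketch of a logarithmic depth mapping (one common way to
// handle planetary-scale depth ranges; the post does not state that Etherea²
// uses this exact scheme). Maps a view-space distance z in [near, far] to a
// [0, 1] depth value, giving every order of magnitude of distance roughly
// the same share of depth precision.
double log_depth(double z, double near_plane, double far_plane) {
    return std::log(z / near_plane) / std::log(far_plane / near_plane);
}
```

With a linear depth buffer, a near plane of 1 cm and a far plane of millions of kilometers would leave almost no precision for distant geometry; the logarithmic mapping keeps both the ant and the far planet resolvable.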

As a reminder, here is a reference video of the very first Etherea¹. This old 64 kB tech demo was available for download in 2010–2011 and was written entirely in C++ and OpenGL:

Out of curiosity: even with texture and geometry compression, a single planet made by hand would take something like 4 GB of disk space. But Etherea¹ was 100% procedural, so the entire tech demo, with 8 full planets and background music, used only 64 kB of disk space.

Anyway, I ended up stopping all work on Etherea¹ (all versions, including the Unity3D one) around 2013. My day job consumed me entirely and I could not handle the side work on Etherea, so it was frozen for a few years.

I have been working on Etherea² on and off for a few months now. Initially I was writing it in pure JavaScript/WebGL 2 (demo here: ), then I decided to jump back to Unity3D, because it is currently the most used 3D engine in the world, which eases adoption. As I plan to have it adopted by other programmers and teams, that was a good choice, I guess.

The whole “Etherea” thing is not just a plain game. I want it to become an open virtual-reality platform which people can easily populate with their own content, and even create new universes (and games that run directly in those universes) by building, scripting and exploring their own and others' creations. I would like to eventually visit a distant space station and find a cinema built by someone, with movies to watch. Or find a race track on a planet's surface where people meet to race for the fun of it. All of that seamlessly streamed, without “loading…” screens or simulation interruptions. Ambitious, I know, but it is to be implemented gradually, in iterative layers, not all at once.

I am still working on the building tools, which are in part heavily inspired by how Cube 2 ( ) handles building. The building tools will also allow normal polygonal models to be imported (initially only the .obj format, maybe other formats later on). There will also be internal C# and even shader creation support, so it will be possible for more advanced users to create new procedural object types (a huge fractal structure floating in space, for example) and spread those new object types throughout the universe. I am sure there are many creative people who can populate the huge space with some really unexpected things.

Here is a short, preliminary Etherea² video:

I want it to become open source, but I also need money to help me push it to completion. For now, while looking for an investor and/or partner, I am trying to crowdfund it through Indiegogo:

As of this post, the campaign has had almost no visibility. The page is just sitting there, with only a single brave backer (whom I thank so much) and practically no daily visits. Let's hope the situation changes and it ends up successfully funded and finally opened in its entirety to all backers.


Thanks for reading.