A Quick Take on NodeJS

INTRODUCTION

As a relatively old programmer, I’ve learned many different technologies and languages over the years. In addition to games/VR, I have also developed many commercial database systems. I was a Senior Analyst at quite a few companies, doing analysis, design and implementation (from the dBase era to Clipper, Access, C++ MFC, PHP, .NET and Java). Lately I’ve been developing a new database application with NodeJS (https://nodejs.org). This new application will most probably become publicly accessible next month.

Grossly speaking, NodeJS is nothing more than a console application which runs Javascript. I am not saying that to criticize it, but to make it easier for newcomers to understand what it really is. It does basically the same thing that ASP.NET and Java have been doing for more than a decade already, *but* it does it in a different way, using Javascript on both sides of the application, which ends up being really cool.

Although I’m writing this in 2018 and NodeJS has been around for a few years already, in my opinion only recently has it become a solid option against the big players. Javascript improved a lot, there are now good IDEs around, and the whole ecosystem took a somewhat clearer shape. It is maturing. The NodeJS environment is still lightweight, easy to install and portable (you can install it and develop servers even on an Android phone, for example). So, used correctly, it is an interesting platform to develop with.

I have found a lot of confusion on the net, though, and because of that I decided to write this post hoping to help people who probably already know how to program, but are finding it hard to jump on the new bandwagon because of all that confusion. Most random tutorials out there make it seem that you need to install half the internet to write a Hello World, and that is not true.

What I mean is: NodeJS itself is very simple. Really. What makes it confusing is the plethora of external libraries and helpers that everyone seems to need so desperately. To make a long story really short, you don’t actually *need* anything else (although some of them will indeed help to accelerate development). There seems to be a phenomenon that exaggerates the “don’t reinvent the wheel” argument, which has already led to absurdities like this: 11 Lines Of NodeJS Code Almost Broke The Internet — seriously, do that many people not know how to left-pad a string, needing a package for it? That can be done in a single line, for God’s sake! I could write a lot about that matter, but OK, I won’t dive into it, at least not in this post.
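For the record, that left-pad really is a one-liner in modern Javascript — a sketch using the standard padStart method:

```javascript
// left-pad in one line, using the built-in String.prototype.padStart
const leftPad = (str, len, ch = ' ') => String(str).padStart(len, ch);

console.log(leftPad('5', 3, '0')); // prints "005"
```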

NodeJS is basically the Google V8 engine (https://developers.google.com/v8). V8 is what Chrome uses to run Javascript code on the web. One of the nice things about V8 is that it is cross-platform, so it runs on many different operating systems.

What NodeJS did was — grossly speaking — compile V8 as a standalone program, and let it execute Javascript from the command line. You create a Javascript file, then run it with NodeJS. Very grossly speaking, that is basically it. To make things more interesting, it provides native system access — built-in and through plugins — so this Javascript becomes more powerful than the sandboxed Javascript of browsers. You can read/write local files, open network sockets, and access a number of cool native features — all still cross-platform.

Now, as that made it so easy to write things and modules for it, these days there are thousands and thousands of plugins, libraries (in the form of packages) and code snippets out there, which can make things really confusing for those who are just arriving at the platform — but if you start from the start, the platform itself is very simple.

Again, grossly speaking, it is nothing more than a console application which runs Javascript.

A FIRST EXAMPLE

Before we dive into code examples, please note that I’m not actually teaching how to code here. You should either already know Javascript, or have enough programming experience in other languages and client-server architectures to infer the logic from the code ahead. This is not a Javascript tutorial; this is a NodeJS architecture tutorial. If you want to learn Javascript, there are many tutorials, but I would suggest you start here: https://www.google.com/search?q=learn+javascript.

Let’s suppose, for example, that you wanted Javascript to sum 2 + 2 and show the result. Traditionally, on a web browser, that would require you to create a simple web page in HTML, put a few lines of script on it, and open it in the browser. Something like this:

<html>
	<head>
		<script>
			function sum(a, b)
			{
				return a + b;
			}
		</script>
	</head>
	<body>

		2 + 2 = <script>document.write(sum(2, 2))</script>

	</body>
</html>

You put the above inside an “index.html” and open it in your browser, and you see “2 + 2 = 4” on the page.

Now, with NodeJS, you don’t need an actual browser. You just create a file with any name, anywhere — let’s say “sum.js” in your current directory — and run it. Something like this:

console.log("2 + 2 = " + sum(2, 2));

function sum(a, b)
{
	return a + b;
}

Then, on a console window, run it with:

node sum.js

And that is essentially the very same thing, except that you did not need a web browser — well, again grossly speaking, NodeJS *is* the “browser” running the Javascript for you, but on the console. And that is the whole essence of NodeJS, really. As simple as it looks.

A BUILT-IN FEATURE EXAMPLE

But then things get more interesting, of course, because NodeJS actually provides built-in, transparent native access to the server-side Javascript (client-side Javascript running on the browser is still sandboxed, for obvious security reasons). Please check the NodeJS API documentation to see all of it. For example, there is a built-in http module which lets you create a simple http server in Javascript in just a few lines:

const http = require('http');
const port = 3000;

// create an http server that answers with "Hello World" when accessed
const server = http.createServer(function (req, res)
{
	res.writeHead(200, {'Content-Type': 'text/plain'});
	res.write('Hello World!');
	res.end();
});

// start listening for connections
server.listen(port, () =>
{
	console.log("Server running on port " + port);
});

Please note that on the above example, I used a built-in feature of NodeJS. You don’t need anything else except NodeJS installed. Just create a file anywhere — let’s say, “http.js” — then type:

node http.js

And you’re done. That is a working “Hello World” web server, and you can test it by opening http://localhost:3000 in your browser. Very simple, I must say, but if you previously searched for tutorials, you probably found people adding half the internet to do stupidly simple things like that, and that is exactly what makes it all look so confusing and unnecessarily complex. As I keep saying, just start from the start, and soon you’ll master it.

Here you will notice that you already have both server and client. The server is run by “node http.js”; the client is run in a browser, by simply opening the URL. Of course you could return more than “Hello World!” from the server — you could certainly write out a complex web page, including Javascript that would then be executed by the browser. Do you see it? The above is like a micro Apache server, let’s say. It runs as a console application, but when you connect to it through a browser, it returns html to the browser. Here the server returned just the string “Hello World!”, but it could return anything to the browser (which is, in fact, the client). You could easily extend this bare-bones http server by reading files from disk and writing them back down the pipeline, effectively creating a feature-complete web server. Just with NodeJS alone.

It is very important that you visualize correctly what is server and what is client in the example above. The server returns things to a client — which does not really need to be a browser, although that is the most common architecture currently. You run a NodeJS server, and you connect to it using a browser.

ADDING PACKAGES WITH NPM

OK, so, as you start to write more and more complex code in Javascript for NodeJS, you will naturally remember that you’ve seen people downloading half the internet to build their applications. Now, with that I can partially agree: there are indeed quite a few useful packages out there which will save you precious development hours. Just use them wisely, and you’ll be fine and productive.

For example, you could replace the bare-bones http server above with something more powerful, and avoid writing a lot of http handling code yourself. NodeJS comes with npm, a package manager which has a large database of packages (libraries), including ExpressJS, for example, probably the most used http server library these days. As I insist on saying, though, ExpressJS is *not* a requirement; it’s just a way of accelerating development, and it’s better to understand that first, so things don’t become too confusing too quickly. Not saying that you should, but you *could* very well write something similar yourself. And there are certainly times when it’s better to write your very own lib than to fight with something that will just bring bloat and confusion to your project. Keep that in mind.

Because now we’re going to use external packages downloaded from the internet to help our productivity, we have to actually create a directory for our project and initialize npm there. Doesn’t look so confusing now, right? Things are hopefully starting to make a bit more sense. OK, so let’s rewrite our http web server to use ExpressJS:

mkdir web-server
cd web-server
npm init
(press enter for all the questions, this is just a simple test anyway)
npm install express

You will notice that npm created a “node_modules” directory, with many sub-directories and files inside. Those are ExpressJS and its dependencies, downloaded and installed by npm. Oh well, at least that makes some sense now. =)

OK so, in the same directory, create a “server.js” with this:

// bare-bones express web-server

const express = require('express');
const app = express();

const port = 3000;
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(port, () => console.log('Listening on port ' + port));

And run with

node server.js

The server starts listening for http connections on port 3000. Just point your browser to http://localhost:3000 and you’ll see the “Hello World” returned by the ExpressJS web server. I won’t dive into the additional features of ExpressJS here; please refer to its homepage (https://expressjs.com) for more. I just wanted to progressively show WHY and HOW additional packages are added to a NodeJS project.

Please note that, although we increased the complexity a bit, we have not touched other things like ReactJS and Webpack yet. They are not really *needed* either, but if you keep going, you’ll eventually find that they start to make sense — for some cases, of course, not all. My criticism, again, is about the inefficiency of adding half the internet (megabytes of code dependencies) to create stupidly simple things which are possibly built-in already, or could be written in just a few lines of code.

Soon you realize that the possibilities are really huge. You can add a MySQL package and have a simple console app that accesses a database — or, going further, access a database and then send the query results down the pipeline through the web server. Add a UI package and have beautiful client-side rendering of those database queries served through the web. And so on.

Let’s create a simple console app that lists people’s names by querying MySQL. This first MySQL example won’t have a web server; it will just list the query results on the console. We will use the mysql package for that.

mkdir mysql-console
cd mysql-console
npm init
(press enter until npm finishes initialization)
npm install mysql

Now we will assume that we have a database “node_tut” with a table “people”, and that this table has only “id” and “name” fields, which we want to list on the console. If you need to learn MySQL, please head to https://www.google.com/search?q=learn+mysql — as with everything else, you’ll find many free resources to study.
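If you want to reproduce the example, that assumed database can be created with something along these lines (the names match the ones used in the code; user and password are up to your setup):

```sql
CREATE DATABASE node_tut;
USE node_tut;

CREATE TABLE people (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

INSERT INTO people (name) VALUES ('Alice'), ('Bob'), ('Carol');
```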

Create a file called “mysql-console.js” on the project’s root directory:

// table is: id and name

const mysql = require('mysql');

var con = mysql.createConnection
(
	{
		host: "localhost",
		user: "some_user",
		password: "some_password",
		database: "node_tut"
	}
);

con.connect(
	function(err)
	{
		if (err) throw err;
		console.log("Connected!");

		console.log("Querying...");
		var query = con.query("select id,name from people");

		query.on('error', function(err)
		{
			throw err;
		});

		query.on('result', function(row)
		{
			console.log("row: " + row.id + " - " + row.name);
		});

		// close the connection only after all rows have been received
		query.on('end', function()
		{
			con.end();
			console.log("Done.");
		});
	}
);

Run it with

node mysql-console.js

And you should immediately see, directly on the console, the list of people registered in that fictional database.

Now we want to go further and access that people list from the web. First, let’s add ExpressJS to that application’s package list:

npm install express

ExpressJS should now be available to our code, so let’s add support for it — edit mysql-console.js and replace the code with this:

// table is: id and name

const express = require('express');
const mysql = require('mysql');
const app = express();

const port = 3000;


// queries the db and sends the result down the web pipeline
function list(res)
{
	var list = "";

	var con = mysql.createConnection
	(
		{
			host: "localhost",
			user: "some_user",
			password: "some_password",
			database: "node_tut"
		}
	);

	con.connect(
		function(err)
		{
			if (err) throw err;
			console.log("Connected!");

			console.log("Querying...");
			list = "<h3>PEOPLE:</h3><br>";

			var query = con.query("select id,name from people", (err, rows) =>
			{
				if (err) throw err;

				rows.forEach( (row) =>
				{
					list += row.id + " - " + row.name + "<br>";
				});

				res.send(list);
			});

			// con.end() is graceful: it only closes after queued queries finish
			con.end();

			console.log("Done.");
		}
	);
}

// express web server
app.get('/', (req, res) => res.send("<a href='/list'>List People</a>"));
app.get('/list', (req, res) => list(res));
app.listen(port, () => console.log('Example app listening on port ' + port));

Run it again with

node mysql-console.js

Note that although the file name is the same, this is now a web server application. It does not list people on the console anymore, but on the web. The web server returns a link when the homepage “/” is accessed; that link points to another route (as it’s commonly called these days), “/list”, and the “/list” route calls the list() function to query the database and return the list of people as html. Open http://localhost:3000 and you’ll see the people from the database in your browser.

Of course, that is all bare-bones, but it hopefully shows the whole concept, with minimal code and installed packages, while keeping a clear distinction between the parts.

CONCLUSION

I think this post is too big already, so I’ll stop here for now. I will probably write a continuation in the near future, demystifying ReactJS and Webpack — when time permits.

To get a solid initial grip on NodeJS, and to expand on it, I suggest you start by visiting its API documentation: https://nodejs.org/dist/latest-v9.x/docs/api — note that this URL will certainly change over time as newer versions arrive, so you may prefer to simply start from the main site, https://nodejs.org, and find the API docs from there.

After you get a solid understanding of the basics and built-in features, you’ll be much less confused by the myriad of available packages. Start small and grow solid; it’s not as hard as it seems.

I hope that this tutorial was useful in some way. Thanks for reading, and good luck.


Ritmo: VR Rhythm Game for Oculus Rift

Here is a new game skeleton of a VR game for Oculus Rift — and possibly Vive in the near future. I just wanted something fun and quick to develop, as a test bed for my new VR interface lib.

The player must hit objects that come in his direction, in sync with the playing music. That makes for good physical exercise, really. And that is basically it.

It of course needs more work — rich and colorful environments, more effects, rewards, statistics, leaderboards etc. — but the bare-bones game already works, as shown in the video above.

More about this later on, thanks.


Solving Ubuntu stuck on Login Screen

After a simple apt-get update && apt-get upgrade, the next time I booted Ubuntu 16.04 LTS it was stuck in a login loop, not letting me into the system normally. Searching Google, I found that many people have had the same problem. I then tried almost everything that was suggested (except for some extremely risky things that would not have worked anyway), but nothing fixed the problem for me.

In desperation I tried one thing that ended up working well, so I want to share the solution. First, at boot time (on the initial Grub selection screen), I chose the previous kernel to boot with. It booted normally, and I could log in to the system again. Then I went to http://kernel.ubuntu.com/~kernel-ppa/mainline, found the latest kernel available in the list (v4.15.10 at the time), and downloaded these files:

linux-headers-4.15.10-041510_4.15.10-041510.201803152130_all.deb
linux-headers-4.15.10-041510-generic_4.15.10-041510.201803152130_amd64.deb
linux-image-4.15.10-041510-generic_4.15.10-041510.201803152130_amd64.deb

Note that those are two linux-headers packages (all and amd64, as I’m using 64-bit Linux) plus the linux-image, all for the same 4.15.10 kernel version. You might prefer the low-latency kernel, but those are really meant for specific use cases of Linux, so I went with the standard generic one.

After getting the three files, I installed them all at once by typing, inside their download directory:

sudo dpkg -i *.deb

Then I rebooted. I don’t really know what exactly caused the login problem, or whether this will solve it for everyone, but for me it fixed the issue, and I also ended up with the latest stable kernel while keeping my system intact. I don’t recommend blindly following the crazier suggestions you find out there (uninstalling parts of the system, or installing more and more random packages). Upgrading the kernel was straightforward, did not add or remove any packages, and was safe: if it had not worked, I could simply have selected an older kernel at boot and tried something else.

Good luck.


Quickly Install Samba on Raspberry (or any Linux, that is)

So, I wanted to quickly share a USB HDD on my network using a Raspberry Pi. This is simple, but I wasted a bit of extra time getting it working this time, so I’m posting it here in case I forget again in the future, or someone comes looking for the same quick solution. On the Raspberry:

If you don’t have nano editor installed, install it first (or skip this if you have it already):

sudo apt-get install nano

Now install Samba and, immediately after, open smb.conf (the Samba configuration file) for editing:

sudo apt-get install samba samba-common-bin
sudo nano /etc/samba/smb.conf

Put the following lines at the end of smb.conf (the only things you really *need* to customize are the path in the second part, and maybe the workgroup, if your network workgroup is not the Windows default WORKGROUP):

[global]
  workgroup = WORKGROUP
  wins support = yes
  netbios name = Raspberry
  server string =
  domain master = no
  local master = yes
  preferred master = yes
  os level = 35
  security = user

[public]
  comment = Public
  path = /mnt/media01/Public
  public = yes
  writable = yes
  create mask = 0777
  directory mask = 0777

Remember that the path above must be changed to the actual path of the mounted USB HDD.

Save (CTRL+O then ENTER to save, CTRL+X to leave nano) and then restart Samba:

sudo /etc/init.d/samba reload

Now Windows Explorer should see the shared folder on the network. Please note that this is not set up for high security. I don’t have strangers accessing my Wifi network, so I’m not too paranoid about that. If you need stronger security than the quickie above, please look elsewhere.


Real-Time Brain Wave Analyzer

The EEG (electroencephalogram) is a neurological test which can reveal abnormalities in a person’s brain waves. EEG devices are traditionally found only in medical facilities. Most people will take an EEG test at least once in their lives. EEG devices have a few dozen electrical sensors which read brain activity and record it for later analysis.

Traditional EEG device

Some years ago, a few portable, consumer-oriented EEG devices appeared, one of them being the Emotiv Epoc, a 14-channel wireless EEG device which, although not comparable to a medical EEG, allows for some interesting brain wave experiments and visualizations. Interestingly, unlike the traditional EEG devices, which only record brain waves for medical analysis of brain health, the portable device also provides some basic facilities for coarse “mind reading”: through some clever real-time analysis of the user’s brain activity, it can most of the time, with some effort, detect a few limited “thoughts” like push, pull and move. So the user can (again, in a very limited way) effectively control the computer with his mind. It even provides an SDK for advanced users and programmers to develop their own applications.

That is all cool, but it was not really what I was looking for. I wanted lower-level device access, direct to the metal: raw sensor readings for research purposes. I posted a new Youtube video showing the first prototype of a real-time brain wave analyzer that I have just started to develop for personal AI research.

At the time of recording, I was wearing an Emotiv Epoc and the waves were real, raw EEG data being read from my own brain. I used a hacked low-level driver (on Linux) to get complete access to the raw sensors of the device, instead of using its built-in software, which provides only limited access to the sensor readings. The hacked driver was not written by me, though — when searching for low-level Epoc protocol info, I found Emokit-c, which already opened the full access I wanted. From there, I connected the device’s data stream to the 3D engine and built the first prototype over the weekend.

For now the prototype is still rough, just showing raw waves with no further processing. In the near future, I plan to connect a Neural Network to this raw EEG analyzer, learning patterns from thoughts and emotions, and doing more useful (possibly serious, medical-related) things.

Although the prototype may look like just a graphics demo, as said above it’s not merely fancy rendering: it is actually talking to a real device and getting real raw EEG data from its sensors, which will later be processed by a Neural Network with serious intents.

More about this in the future, when time permits.


Cross-Platform Neural Network Library

I have been spending some time sharpening my Artificial Intelligence skills again. I have been around https://www.tensorflow.org and, although it’s nice, powerful and useful, I still very much like to write my own code and completely understand and master the object of study, so that is what I did recently — a personal neural network framework written entirely from scratch. I struggled a little with the different back-propagation gradient formulas, but after mastering those details I am satisfied with the current results. The acquired knowledge helps me better understand the bigger frameworks like Tensorflow.

This tiny, unnamed Neural Network library of mine is cross-platform, compatible with basically all hardware platforms and operating systems, and still small, with no external dependencies at all. It is fully self-contained. I like that, because deployment is very easy, and I can integrate it into any app — desktop, mobile or embedded — in a matter of minutes.

The following simple video shows basic learning and recognition of digits. I ran it inside Unity3D because of its ease of visual prototyping, but as said, the NN library itself has no dependencies, so it’s not tied to Unity or any other engine or library.

I will be constantly adding features to this personal lib — it’s not just for digits recognition! — and I intend to have it running on an intelligent robot which is going to entertain the family for a long time. =)

More on this later, thanks for reading.


Mobile Game and Neural Networks

At this moment I am working on Etherea² only in my spare time, on weekends. On normal work days I am programming a new (and completely unrelated) commercial mobile game, and I’m honestly happy to have paid work, so things keep going nicely. The game is relatively simple and nice — a real-time strategy game — but I won’t give any further details, as it is not my property and I normally don’t comment on commercial work.

But then, back to weekend programming: I’ve been creating my own Deep Neural Network library. It is not tied to or dependent on any particular engine, so it can be used both in database-related systems (which I worked on in the past and have been contemplating again) and in VR/games.

Neural Networks are awesome, as they try to emulate how real neurons work. The artificial ones also have dendrites (which I reduce to input/output ports) and synapses (which I reduce to connections between different neurons), and from that simple structure some really interesting things can be created. As you probably already know, they can be used in many different domains: vision, voice and general cognition are some of them.
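To give a feel for how little a single artificial neuron really is, here is a sketch in plain Javascript (just an illustration, not the actual code of my lib): a weighted sum of the inputs plus a bias, squashed by a sigmoid function.

```javascript
// one sigmoid neuron: weighted inputs + bias, passed through a squashing function
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function neuron(inputs, weights, bias)
{
	let sum = bias;
	for (let i = 0; i < inputs.length; i++)
		sum += inputs[i] * weights[i];
	return sigmoid(sum);
}

// with all weights and the bias at zero, the output sits exactly at 0.5
console.log(neuron([1, 0], [0, 0], 0)); // prints 0.5
```

Wire many of these together, layer by layer, and you have a feedforward network.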

For now, my implementation is all CPU-based. When it becomes rock solid and fully featured, I will consider moving some or maybe all parts to a GPU implementation, using Compute Shaders. Right now I’m still satisfied with CPU performance, so there is no urgency for the GPU translation yet.

The following screenshot is taken from a small network visualized in Unity3D — so I can quickly test/confirm that the actual topologies and synapses are being created correctly.

The network above is a feedforward one — an extremely simple (but correct and useful) neural net in this case. However, the underlying structure can automatically build and interconnect a neural net of any size (limited only by memory), using either Perceptrons or Sigmoid neurons.

As for usage, I have a list of plans for it. One is Etherea² life and behavior simulation; another is a robot which is now waiting for a second iteration of development.

More on this later. Thanks for reading.


Reality Shock

That is right. After almost a week on Indiegogo, Etherea² has had only about 130 visits and one brave backer, so in practice it is still unknown to the world, lost in the void. It did not generate any traction. A campaign launched too prematurely, without any marketing at all — that is the problem.

I still believe it is commercially viable, though. What it needed was a small pre-investment so that it could be properly developed until reaching a Minimum Viable Product (MVP), when it would be able to generate traction — before starting a crowdfunding campaign, that is.

Anyway, I will continue working on the project in my spare time — as always — regardless of funding. I just love this project, and I want it completed even if it ends up being a game for me to play with my kids — while also teaching them programming. =)

Thanks for reading.

Vegetation in Etherea²


Dev Log opened

Hi there. So, after a few years, I have decided to reopen a public log. Here I’ll be posting about my personal progress in general — mostly programming and robotics, and sometimes real-life subjects. Welcome, and feel free to leave me a comment. Thanks.

Work on Etherea²

Etherea² for Unity3D can potentially handle a universe trillions of trillions of trillions of kilometers across, with 0.1 mm accuracy everywhere, while Unity3D (and almost all current 3D engines out there) can only handle scenes of about 64000 meters out of the box. That size limit can’t fit a single Etherea² planet, much less its virtually infinite number of planets. So Etherea² effectively extends Unity3D’s limits to beyond the size of the observable universe.

But it does not stop there; the size of the universe is just one of the many challenges Etherea² solves. It can also render a 1 cm ant very close to the camera at the same time as it renders a 13000 km wide planet immediately behind it, plus another planet a million kilometers behind both, without z-fighting. You can leave something on a planet, fly to a different star system light-years away, explore another planet, then come back and find the same object in the same exact place, without any interruptions or loading screens. And you don’t need a supercomputer for that — in fact, you could be doing it on your Android phone. Yes, at reduced resolution and quality compared to the desktop version, but it does run on mobiles.

As a reference, here is a video of the very first Etherea¹. This old 64kb tech demo was available for download in 2010-2011, and was done entirely in C++ and OpenGL:

Out of curiosity: even with texture and geometry compression, a single planet done by hand would take something like 4gb of disk space. But Etherea¹ was 100% procedural, and the entire tech demo, with 8 full planets and background music, used only 64kb of disk space.

Anyway, I ended up stopping all work on Etherea¹ (all versions, including the Unity3D one) around 2013. The daily job consumed me entirely and I could not handle the side work on Etherea. It was then frozen for a few years.

I have been working on Etherea² on and off for a few months now. Initially I was writing it in pure Javascript/WebGL² (demo here: https://imersiva.com/demos/eo ), when I decided to jump to Unity3D again, because it’s currently the most used 3D engine in the world, which eases adoption. As I plan to have it adopted by other programmers and teams, that was a good choice, I guess.

The entire “Etherea” thing is not just a plain game. I want it to become an open virtual reality platform which people can easily populate with their own content, and even create new universes — and games that run directly in those universes — by building, scripting and exploring their own and others’ creations. I would like to eventually visit a distant space station and find a cinema built by someone, with movies to watch. Or find a race track on a planet’s surface where people meet to race for the fun of it, etc. All seamlessly streamed, without “loading…” screens or simulation interruptions. Ambitious, I know, but it is to be implemented gradually, in iteration layers, not all at once.

I am still working on the building tools, which are partly inspired by how Cube2 ( http://sauerbraten.org ) handles building. The tools will also allow normal polygonal models to be imported (initially only the .obj format, maybe other formats later on). There will also be internal C# and even shader creation support, so it’ll be possible for more advanced users to create new procedural object types — a huge fractal structure floating in space, for example — and spread those new object types throughout the universe. I am sure there are many creative people who can populate the huge space with some really unexpected things.

Here is a short, preliminary Etherea² video:

I want it to become open-source, but I also need money to help me push it to completion. For now, while looking for an investor and/or partner, I am trying to crowdfund it through Indiegogo:

https://www.indiegogo.com/projects/etherea-open-virtual-reality-universe/x/2642555

As of the date of this post, the campaign has had almost no visibility. The page is just sitting there, with only a single brave backer (whom I thank so much), and practically no daily visits. Let’s hope the situation changes and it ends up successfully funded, and finally opened in its entirety to all the backers.


Thanks for reading.