more than one central-server?

Viewing 15 posts - 1 through 15 (of 23 total)


    Radhoo will open a local webserver on urad-devices “to break the dependency to the central server”.

    What about serving the data to more than one internet server?
    From the device software's and the users' view, it should be no problem to contact 3 servers with a short HTTP timeout of 5 seconds.

    From a single server's view it's also no problem; it's just the load Radhoo sees right now.

    1) But synchronisation between the servers has to be considered!
    2) Should these servers carry the website only as a backup, and normally redirect to uradmonitor.com?
    3) Could working distributed systems (like Twitter, Dropbox, …) be used to store or share the data?
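The "3 servers with a 5-second timeout" idea could be sketched roughly like this. This is a minimal Python sketch of the logic only (the real firmware is embedded C); the mirror hostnames and the injectable `fetch` parameter are assumptions for illustration, not part of the actual system.

```python
import urllib.request

def http_get(url, timeout):
    # Real transport: plain HTTP GET with a short timeout.
    with urllib.request.urlopen(url, timeout=timeout) as r:
        return r.read()

def submit(path, servers, fetch=http_get, timeout=5):
    """Try each server in order with a short timeout; return the first
    successful response, or None if every server is unreachable."""
    for base in servers:
        try:
            return fetch(base + path, timeout)
        except OSError:  # URLError/timeouts/refused connections all subclass OSError
            continue     # fall through to the next server
    return None
```

With a 5-second timeout, even the worst case (all three servers down) only delays a unit by about 15 seconds per transmission attempt.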



    At the moment it is possible to take down the whole urad network by “boarding/capturing/breaking” the uradmonitor.com server.
    The server has to be recovered, potentially a new DNS entry (IP) has to be shipped, and if so, two days later every device in the world has to be manually rebooted.

    This will also happen if uradmonitor's provider decides to change its IP,
    or if Radhoo decides to move the server.

    It is also possible to take down the whole urad network by virtually breaking, stealing or seizing (law, court) its DNS entry:
    as soon as devices reboot, they will be lost and need to be manually reprogrammed to a new domain.
    (This kind of domain theft has already happened to me with a DENIC domain.)

    In the meantime please read paragraph 2 again. We HAVE to address this.
    If somehow the IP of uradmonitor.com changes (not unusual),
    1) a new DNS entry has to be shipped (1 to 2 days)
    2) and every device in the world has to be manually rebooted.

    Radu, I don't know the code on the devices, so correct me if there is a timespan greater than my testing time.
    But I believe you should do a new DHCP and DNS lookup (or an automatic reboot) after a period of connection errors.
    (I can't test this for too long, because my device would be down in the meantime.)


    These are all valid points, Vinz, and we should see how to address the central-server dependency.

    But at least for the IP, things are not that bad:
    – the units use DNS to resolve data.uradmonitor.com to the latest IP
    – if communication fails, the units will automatically reboot as part of a watchdog mechanism, so at least this part is covered.
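The connection-watchdog behaviour described above could be modelled like this. This is a Python sketch of the logic only, not the actual AVR firmware; the 5-minute threshold is the figure confirmed later in the thread, and the class/method names are invented for illustration.

```python
WATCHDOG_LIMIT_S = 5 * 60  # "connection-watchdog" interval quoted in the thread

class ConnectionWatchdog:
    """If no transmission succeeds for 5 minutes, trigger a reboot,
    which on the real unit re-runs DHCP and the DNS lookup."""

    def __init__(self, now):
        self.last_ok = now
        self.reboots = 0

    def on_send(self, ok, now):
        if ok:
            self.last_ok = now                 # success resets the timer
        elif now - self.last_ok >= WATCHDOG_LIMIT_S:
            self.reboots += 1                  # on hardware: watchdog reset
            self.last_ok = now                 # timer restarts after reboot
```

The key property is that the reboot path re-resolves DNS, so an IP change at the provider heals itself within one watchdog cycle.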

    As you said, it is a good time to start thinking about a way “to break the dependency to the central server”, while keeping the solution easy to use (plug and play).


    Thanks for your answer.

    I only saw the unit do a DNS lookup after a reboot.

    How long is the watchdog's timeout? My device did not reboot while I was testing a connection loss over several minutes.


    My pleasure. The “connection-watchdog” is set to 5 minutes.


    Hi guys,

    Yes, I agree with you both – the decentralisation of the single-server model should be in our top 10 priorities, to defend against everything from hardware failure to malicious attacks.

    How about if duplicate servers were hosted by trusted volunteers – the units could send data to the primary server until it becomes unreachable, then would send to a secondary, then to a tertiary server if the secondary fails, etc. We'd have to write some form of data replication (as close to real-time as possible) to keep all the servers in sync with each other – so when the primary comes back online, it receives the data collected by the secondary, etc. There are definitely better solutions, but I do it this way for a cluster of servers at my work, and it definitely works.


    Oops 🙂
    I very much apologize for the trouble concerning the change of the server's IP under the same DNS name.
    I confirm the “connection-watchdog” is set to 5 minutes.

    Now I have done several 20-minute tests.
    You are absolutely right; I confirm the correct reboot and re-allocation of DHCP and DNS. The first reboot completes up to 7 minutes after the server goes down, because it takes 1 minute to recognise the loss and 1 minute to reboot. After that the interval is exactly 5 minutes.

    As I wrote, I did not see this because my test was (a little) too short, so I assumed the worst.

    I think the other points are still valid.


    You’re right Ally,

    this afternoon I thought about an extra option, which I'd like to bring in:

    What if the primary uradmonitor URL were fixed, and the second and third could be set manually (by the user via the local webserver) or by requesting extra information from uradmonitor.com once a day?

    1) Fixed: uradmonitor.com
    2) Variable, requested from uradmonitor.com (one of the volunteer servers you mentioned)
    3) Also variable, requested from uradmonitor.com
    4) Variable, at the user's choice (I don't know if this is necessary or wished by you).

    Synchronisation brings the need to also transmit the timestamp.
    In my personal implementation this is optionally possible at the moment. But with this you can seriously corrupt data in the database;
    currently you can only disturb the current values.


    So if I understand it correctly, this could be packed into the following scenario:
    1) By default, program the units with a default MAC, a default ID and uradmonitor.com as the single server.
    2) When the unit connects, it would download a freshly allocated MAC/device ID and a list of alternative servers, and store the data in its EEPROM.
    3) Periodically, the unit would check whether the alternative servers are available (and delete those that are offline).
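Steps 2 and 3 above could look roughly like this. A minimal Python sketch of the list-maintenance logic only; the function name, list format and `reachable` predicate (e.g. a quick GET with a short timeout) are assumptions for illustration — the real unit would store this in EEPROM.

```python
PRIMARY = "data.uradmonitor.com"

def refresh_server_list(stored, downloaded, reachable):
    """Merge a freshly downloaded list of alternative servers into the
    stored (EEPROM) list, dropping entries that are no longer reachable.
    The primary entry is never deleted, so the unit can always fall
    back to uradmonitor.com to fetch a fresh list."""
    merged = list(dict.fromkeys([PRIMARY] + stored + downloaded))  # dedupe, keep order
    return [s for s in merged if s == PRIMARY or reachable(s)]
```

Keeping the primary undeletable matters: if every mirror goes away, the unit still has one known-good address to bootstrap from.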

    Would this work? Does it cover everything, enough to make this decentralisation stand?

    I believe the logic will require only minimal code that can fit in the available flash memory. The only issue is that I find multiple transmissions of the same data to multiple servers a bit redundant (not a big issue though).


    I think your scenario works, Radu. However, I think the unit would still have to connect to a single point to retrieve the list of alternate servers, is that correct?

    In Vinz’s suggestion, there’s the option to point your unit at another server manually, so if the unit can’t retrieve the list from the central server, you could get a list of alternatives from Facebook, Twitter, email, etc etc and manually point at it. That sounds pretty useful.

    I don’t know how to deal with the initial allocation of the mac/device id in that scenario though.


    So… thinking about how to achieve global replication, we need to know what DB underpins the master server.

    Once we know that, we will begin to understand what’s possible.

    So the big questions are …

    What is the DB?

    I suspect it's MySQL.

    How much space per record is consumed?

    Have you already extrapolated the DB space required for the deployment of, say, 1000 units' worth of data for a year?

    If it is MySQL, then replication can be done securely in many ways.




    Hi Tim,

    Yes, you're correct, the DB is MySQL. There are currently two tables – Radu explains:

    t1) devices
    Has a single row per uRADMonitor detector; the primary key is the device ID. It holds only the latest readings, the 24h average for radiation (as CPM), the detected location, the overridden location, the country code and the offline/online status (considered offline if no data has been received for more than 10 minutes).
    As we speak this has 116 rows and uses a little over 12 KB.

    t2) uradmonitor
    This now has 7.4 million rows and a size of 432 MB.
    Each unit in the network sends data via an HTTP GET call to data.uradmonitor.com. The parameters include the unit ID, the measurements and a CRC. All this data goes into the database.
    So each minute, one station sends approx. 58 bytes of data to the server.
    The problem is we got quickly from one unit to ten, and we are now approaching 100 units.
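Tim's 1000-unit question can be answered back-of-envelope from these figures. A quick worked estimate using only numbers quoted in this thread; the "MB/GB" rounding and the lack of index/overhead accounting are simplifying assumptions.

```python
# Figures quoted above:
BYTES_PER_MIN = 58                   # payload per station per minute
MINUTES_PER_YEAR = 60 * 24 * 365     # 525,600

per_unit_year = BYTES_PER_MIN * MINUTES_PER_YEAR   # ~30.5 MB per unit per year
fleet_year_gb = per_unit_year * 1000 / 1e9         # ~30.5 GB/year for 1000 units

# Cross-check: the quoted table stats (432 MB over 7.4 M rows) imply
# roughly 58 bytes/row -- consistent with the per-minute payload figure.
bytes_per_row = 432e6 / 7.4e6
```

So a 1000-unit network generates on the order of 30 GB of raw payload per year, before MySQL indexes and storage overhead — easily within a single commodity server, but worth planning for as the network grows.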

    When I mentioned replication I was thinking of doing it at the software level, some sort of ETL task to shuffle the data around and keep all the servers/DBs in sync – but Radu suggested (I think) that the units themselves send their data to multiple servers. There would be some duplication of network traffic, but it's definitely a cleaner option.


    Hi Ally,

    The only problem with sending to multiple servers is that if a path is down between the client and a server, the data between the servers will get out of sync, because the client won't be able to update them all – so you would still need a method to keep the other servers up to date.

    I've used this before…


    to keep MySQL-backed RADIUS servers in sync in a multi-master config.




    Ah, good point – you're absolutely right. I'd overlooked that; the offline server needs some way to ‘catch up’ with the online ones.
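The software-level 'catch up' could look roughly like this. A minimal Python sketch, not the MySQL-native replication Tim refers to; the row shape `(device_id, timestamp, value)` and function name are assumptions, but the key idea — using (device ID, timestamp) as a natural key so replaying rows is safe — follows from Vinz's point about transmitting timestamps.

```python
def catch_up(stale, fresh):
    """Copy every reading the stale server missed while it was offline.
    Rows are (device_id, timestamp, value) tuples; (device_id, timestamp)
    acts as the natural key, so replaying already-present rows is a no-op
    and the sync is idempotent."""
    have = {(d, t) for d, t, _ in stale}
    stale.extend(row for row in fresh if (row[0], row[1]) not in have)
    stale.sort(key=lambda r: (r[0], r[1]))
    return stale
```

Idempotence is the property that makes this safe to run repeatedly, e.g. from a cron-style ETL task, without creating duplicates.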


    There is a nice trick that DNS offers:
    you can simply create multiple DNS A records for the same data domain with different IP addresses, and the client will round-robin across them until a working one is located.

    Here is a good example with Google DNS records: https://och.re/x1hICjzW6VmXd1O2UlXWBdqEvqcW9OGR/file.txt
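The client side of the round-robin trick could be sketched like this. A Python sketch for illustration (the real units run embedded C); the function names and the injectable `opener` parameter are assumptions — the point is that resolving yields several A-record addresses, and the client walks them until one connects.

```python
import socket

def resolve_all(hostname, port=80):
    """Return every IPv4 address behind the hostname; with multiple
    A records (round-robin DNS) this yields several candidates."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return list(dict.fromkeys(sockaddr[0] for *_, sockaddr in infos))

def connect_any(addresses, port=80, timeout=5, opener=None):
    """Try each resolved address until one accepts a TCP connection."""
    opener = opener or (lambda a: socket.create_connection((a, port), timeout))
    for addr in addresses:
        try:
            return opener(addr)
        except OSError:
            continue  # dead address -- try the next A record
    return None
```

One caveat: this only protects against a dead server IP, not against the DNS entry itself being seized, which is the scenario Vinz raised earlier in the thread.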
