I'm an ops person a year into business, ask me anything.
Ask me technical questions, ask me to design a theoretical system, ask me things about my career.
This will be my permanent AMA for anything that's technical.
Top comments (13)
Ok, why is it that when I press the thinking emoji the number goes down? Btw, as a noob I have lots of questions and few ways to express them. I think I will stick to my book "Learn Python the Hard Way" for now.
I recommend looking at mechanisms the underlying system already offers. There is a lot to learn about file descriptors and their power, and then there is the whole topic of IPC. As soon as you know how to effectively stick together things like `fork()`, `mkfifo()`, pipes and such, you'll naturally split things into small processes communicating with each other. This is part of the Unix philosophy and makes things a little more maintainable.
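A minimal sketch of that idea from the shell, using a named pipe (the path and message are arbitrary):

```bash
# Create a named pipe (FIFO) that two processes can talk through.
mkfifo /tmp/demo.fifo

# Writer: opening a FIFO blocks until a reader shows up, so background it.
echo "hello from the writer" > /tmp/demo.fifo &

# Reader: receives whatever the writer sent.
cat /tmp/demo.fifo

# Clean up.
rm /tmp/demo.fifo
```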
Apart from that, if you're into maintaining applications, I recommend reading the man pages of the most important syscalls (`read()`, `write()` and `open()` are a great place to start) and running `strace` on a few small things like `cat file` to see what exactly happens on the system. This opens up a whole new world of understanding how the stack below works and what to avoid in your application (e.g. not putting vital assets in a distributed filesystem). Once you're done with that, you can hopefully work effectively with servers in an ops way.
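To make the `strace` suggestion concrete, a hedged illustration (the exact syscall list varies; modern libcs use `openat()` under the hood):

```bash
# Show only the syscalls mentioned above while cat copies a file to stdout.
strace -e trace=open,openat,read,write cat /etc/hostname
```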
One example would be: the system is kinda down, what happened? Meanwhile there's an `rm -rf` running on a large, deeply nested cache directory. The not-so-obvious first step would be to `SIGSTOP` the `rm` so you buy yourself some time. Then you could e.g. just `mv` the directory somewhere else and delete the files slowly at the new place (25 files per second is usually not noticeable).
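A minimal sketch of that rescue, assuming the cache lives at `/var/cache/app` and the runaway `rm` is the only process with that name:

```bash
# Pause the rm so it stops hammering the disk (buys time).
pkill -STOP -x rm

# A rename within the same filesystem is cheap, even for huge trees.
mv /var/cache/app /var/cache/app.todelete

# We don't need the original rm anymore; terminate it.
pkill -KILL -x rm

# Delete slowly (roughly 25 files per second) so I/O stays usable.
find /var/cache/app.todelete -type f | while read -r f; do
  rm -- "$f"
  sleep 0.04
done
```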
If you, after that, write an application, you might want to get a broad overview of the tech that's out there that you can just plug in between your components to get much more performance. Examples include: Varnish, nginx for TLS offloading, *SQL replication with reads from the slaves, and MaxScale to make that read/write splitting easier.
Firstly, can you read minds? I was going to ask about config and .ini files, since whenever I go on GitHub I see .py files and all this other stuff and frankly don't have any idea what to do with it.
Ok, firstly, thank you, I really appreciate that you took the time to respond. Secondly, that scenario sounded pretty cool =). Can you imagine what it will be like working with the new AI chips... Anyway, I am getting off topic. I'll go from the top and get to it. By underlying system mechanisms, do you mean how software works with hardware? I learnt somewhere that RAM is an array and the cache holds the pointer to the last used memory (I think) in RAM. Hmmm... I think I'll start here: bottomupcs.com/file_descriptors.xhtml
I found tldp.org/LDP/lpg/node18.html, which I think will be useful. I'll also check out the man pages.
Btw, do you have any good reference books or web pages for learning?
I have no idea.
The second one I didn't expect you to know. Silly question, really. =/ I was purely hoping that someone at DEV.to might read the post.
If by systems documentation you mean "documenting a setup, how it works, what each part/server does and so on", read on.
Assuming you mean "done well":
Building conventions is a great way to avoid the whole thing altogether.
Not that I particularly hate documentation, but if you're taught to build a simplistic demo-setup and can immediately recognise that pattern all over the infrastructure and work with/adapt it, that's a vital way to work.
If there's something new, a yet-to-be best practice or some old practice, then you need a place to stick that.
A large database that keeps metadata about machines (the name, not to be confused with the hostname; patch information; arbitrary tags and such) is usually a good thing for stuff that needs to be documented for every single machine.
For everything out of the ordinary, I've found some sort of wiki with a page that collects all the specials quite neat.
We have a very nice document called a QA checklist.
After building a setup we check against that to see whether everything's there.
The QA is done by someone not involved in the setup process so that everything fishy/undocumented immediately pops out and can be documented.
Sorry, still caught up with the Cthulhu Mythos Tales.
Hi Ben! I noticed that you self-learnt distributed systems. Can you kindly point to the resources you used to do that? Thanks!
That's a tough question ;-)
There's good and bad news for you.
The bad news is: I didn't really self-learn it.
I've been using SSH to connect to my server and netcat to copy files across the LAN from my PC to my roommate's PC long before encountering real distributed systems. When I did encounter them, they seemed very natural to me, so I didn't have to look up much at all.
The rest I learned from my then-colleagues and casually talking to people.
The good news is however, it's all pretty easy if you look at it neither from a top-level perspective nor from a lowest-level one, but from somewhere in-between.
Example
Let's take a (pretty common) example: a distributed shop system.
So what's the opposite of a distributed system?
A single monolithic one, right. Let's transform one into another.
Single Server
Luckily our example shop system is (not advertising anything here) Magento, Oxid, or Shopware. All of those are written in PHP, so they run pretty much out of the box. We just set up an Apache webserver (for simplicity's and compatibility's sake) with mod_php. It really just boils down to installing it and telling it where the docroot is.
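As a rough sketch on a Debian-flavoured system (package names and the docroot path are assumptions):

```bash
# Install Apache with mod_php.
apt-get install apache2 libapache2-mod-php

# Point the default vhost at the shop's docroot.
sed -i 's|DocumentRoot /var/www/html|DocumentRoot /var/www/shop|' \
    /etc/apache2/sites-available/000-default.conf

systemctl reload apache2
```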
Database
The installer is up and running and we're navigating the configuration menu.
The installer asks you for database credentials.
What do you do?
So, we wanted to setup a distributed setup, right?
We're going to need a central database, because the data's supposed to be the same everywhere, obviously.
We spin up another server, install MySQL, create a user with permission to connect from the PHP-server.
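Something along these lines (the PHP server's address and the credentials are placeholders):

```bash
# On the database server: create the schema and a user that may
# connect from the PHP server (assumed to be 10.0.0.10).
mysql -e "CREATE DATABASE shop;"
mysql -e "CREATE USER 'shop'@'10.0.0.10' IDENTIFIED BY 'changeme';"
mysql -e "GRANT ALL PRIVILEGES ON shop.* TO 'shop'@'10.0.0.10';"
```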
This can pose some problems if you can't trust your network, i.e. if the servers communicate over a public wire: we would be sending all the data in plaintext over the internet, so we'd need some encryption in place. There are solutions for that, e.g. VPNs (OpenVPN, tinc), encrypted connections, or middleware (I think MaxScale as a local installation speaking TLS in the backend would work, and would make failover easier as a bonus). So let's not worry about that and just say that in our example the wire is secure.
We can now tell our PHP-server to connect and start doing things.
Going Distributed
So, we want two of these servers, right?
A problem arises: most systems have things like cronjobs, admin interfaces, file uploads, etc., which should only be triggered on a single server.
But we also want two servers so one of them can break.
Let's have a single authoritative server, the one we're already using.
Then we set up two other servers, which we can copy files to from the authoritative one (rsync over SSH comes to mind), and which also run Apache and mod_php.
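The sync could look like this (hostnames and docroot are assumptions):

```bash
# On the authoritative server: push the docroot to both app servers.
rsync -az --delete /var/www/shop/ app1:/var/www/shop/
rsync -az --delete /var/www/shop/ app2:/var/www/shop/
```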
Load Balancing
Now how do we teach our browser to talk to all three of these servers?
We don't!
We create another server.
That server will be responsible for distributing the requests. All servers involved speak HTTP so it boils down to forwarding the requests. This is good, let's use something that does exactly that.
Nginx is a good choice for that.
So the nginx is "terminating" the connection, meaning that clients connect to it.
There are several things that become pretty easy then. One thing we want to do is route all requests for `/admin` (or wherever the admin interface is) to the authoritative server.
After we've done that, we can make changes and upload pictures on the admin server and then sync its docroot to the application servers and be done with it.
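A minimal sketch of such an nginx config (addresses are assumptions: the authoritative server is 10.0.0.10, the app servers 10.0.0.11 and 10.0.0.12):

```bash
# Drop a reverse-proxy config into nginx (written via heredoc for brevity).
cat > /etc/nginx/conf.d/shop.conf <<'EOF'
upstream shop_backend {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;

    # The admin interface only ever hits the authoritative server.
    location /admin {
        proxy_pass http://10.0.0.10;
    }

    # Everything else is spread over the application servers.
    location / {
        proxy_pass http://shop_backend;
    }
}
EOF
nginx -s reload
```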
Further Things To Do
Use memcached or Redis for sessions (a sketch follows after this list).
Setup a database slave for failover.
Put varnish in-between the load-balancer and the backends.
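For the session part, a hedged sketch assuming PHP with the phpredis extension installed (the config path and the Redis address are assumptions):

```bash
# Make PHP store sessions in Redis instead of local files.
cat > /etc/php/8.2/apache2/conf.d/99-sessions.ini <<'EOF'
session.save_handler = redis
session.save_path = "tcp://10.0.0.20:6379"
EOF
systemctl reload apache2
```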
Final Words
Having a distributed system is pretty easy if you stick to the pattern above: one authoritative server for the admin side, interchangeable application servers behind a load balancer, and shared state (database, sessions) pulled out into dedicated services.
I am a beginner. Where should I start?
I have a Raspberry Pi. What would be a way of finding it when it's connected via Ethernet to my home router/modem, and how can I transfer files?
Use a static IP, although I don't particularly like that method.
Or query your DHCP server (usually your router). Most of the time it's also possible to use names.
If you set up your own DHCP server, you can also do one of the following: hand out a fixed lease tied to the Pi's MAC address, or register its hostname in your local DNS.
If that isn't possible or not wanted, you can boot it up at a screen and run `ifconfig`; there you'll see the MAC address (e.g. `ether 01:24:de:ad:be:ef`). Note that down. You can then (assuming you are on the same network when it does some network operation, like renewing the DHCP lease) use things like ARP (Address Resolution Protocol, the protocol that resolves an IP to a MAC) or RARP (Reverse ARP).
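For example (the MAC is the placeholder from above; the subnet is an assumption):

```bash
# Provoke some traffic so the Pi lands in the ARP cache, then look
# up the noted MAC address.
ping -b -c 1 192.168.1.255 2>/dev/null

# Classic tool:
arp -a | grep -i '01:24:de:ad:be:ef'

# Modern equivalent:
ip neigh | grep -i '01:24:de:ad:be:ef'
```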
Copying files can then be done easily using SSH/SFTP (using e.g. scp), or, if it doesn't need to be encrypted, netcat.
Netcat would be something like `nc -q1 -lp 1337 > file` on the RPi and `nc -q1 $IP_OF_RPI 1337 < file` locally.
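And the encrypted variant with scp (user and address are assumptions):

```bash
# Push a file to the Pi over SSH.
scp ./file pi@192.168.1.50:/home/pi/

# Pull a file from the Pi.
scp pi@192.168.1.50:/home/pi/file .
```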