@tvaughan I am into a lot of DevOps… so I am really interested in your ideas
would having a registry/repo for your docker volumes solve some big problem for you?
you know what you could do? use rsync to a remote drive
you can rsync volumes, remotely, across the wire
When you run a docker image, any data that is created is lost when the container is removed. For example, if you had postgres running in a container, everything that you saved to postgres would be gone with the container. A volume is a way to persist data. For example, you would say "mount this volume inside the postgres container at /var/lib/postgresql/data." This is how you persist data using containers
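rough sketch (the volume/container names and the password are just examples):
```
# create a named volume and mount it at the official postgres image's data directory
docker volume create pgdata
docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# remove the container; the data survives in the pgdata volume
docker rm -f db

# a fresh container mounting the same volume picks the data right back up
docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
```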
Currently I know of no good way to back up a volume, other than copying the files to somewhere like a barbarian
rsync
I actually had to use rsync on docker for a while… because the native docker sync was not working
Yes, I know. I would like to just push a snapshot of the volume somewhere, just as easily as I can with docker images
yes, you can do that quite easily
it will mirror it for you, in real time
rsync uses the SSH protocol
I found it really fast actually
I use rsync a lot. It's a great tool
I think rsync used rsh by default way back; these days it defaults to ssh, but you can be explicit about it, like rsync --rsh=ssh
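for a volume that's just files on disk, a rough sketch would be something like this (backup-host and the paths are placeholders; the /var/lib/docker/volumes/... path assumes the default local volume driver, and for a live database you'd want to pause writes or take a dump first rather than copy the data dir hot):
```
# copy a named volume's data to a remote host over ssh
sudo rsync -az --rsh=ssh \
  /var/lib/docker/volumes/pgdata/_data/ \
  backup-host:/backups/pgdata/
```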
I guess that leads me to this question though
are you running Postgres in the same docker containers?
are you putting several pieces of your stack into 1 container?
with the volumes
or… are you connecting up to a Postgres cluster?
on AWS
Each container should be one and only one service. Possibly multiple ports, but I don't like the idea of running more than one service in a container
agreed
I would only put everything together, for local dev
to make it easier
well, I mean 1 config, but different images
docker-compose
I never met a DevOps Clojurian before… super cool
I don't use docker-compose. I think docker is a terrible service manager. I just use a makefile and run each container in a separate tmux pane
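so each pane just runs one container in the foreground, roughly like this (the app image, the env vars, and the make target names are placeholders I'm making up to illustrate):
```
# shared network so the containers can reach each other by name
docker network create dev

# pane 1 (e.g. a `make db` target): the database, in the foreground
docker run --rm --name db --network dev \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# pane 2 (e.g. a `make app` target): the app, pointed at the db by container name
docker run --rm --name app --network dev \
  -e DATABASE_URL=postgres://postgres:example@db:5432/postgres \
  my-app:latest
```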
yea, you know… I had a ton of problems with docker-compose
I had to debug the crap out of it
across Linux, OSX and Win
I am getting ready to dive into Kubernetes
because I want to mirror my local dev, to staging, to production… we call this “representational development”, meaning you build on what the production server runs
Yes, development and production should match as much as possible
yes, because it limits the edge cases that cause problems
hahahaha. ok. I am going to go.. talk the entire team into using Ubuntu
lmao hahahahaa
Fedora Silverblue! 🙂
😂
ahhh, you are Red Hat, CentOS?
you must be doing more enterprise or security-sensitive work
I have not used Red Hat since the banks
No, but Silverblue is an immutable OS. I'm partial to the concept
a what? woah
immutable OS? hmmm. enlighten me, Red Hat Master
Kinda like CoreOS was, if you're familiar with that
It's built on rpm-ostree (with BTRFS as the default file system), which means that you don’t upgrade packages in place: you just create a new version of the operating system with the new packages
so, you can revert the system back?
I am going to look into this, news to me
Once you’ve taken new versions of packages, you just reboot into the new version, and you’re good to go. If there’s a problem with it, you can revert to the old version. I like this for stability, but also for security.
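day to day that looks roughly like this (the standard rpm-ostree commands):
```
# stage a new version of the OS for the next boot
rpm-ostree upgrade

# see which deployments you can boot into
rpm-ostree status

# if the new version gives you trouble after a reboot, flip back
rpm-ostree rollback
```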
OMG Wow
Right
Ok, I am sold… Installing now
forget VMWare
🙂
especially with Python V2 to V3
I foobar'd the Python install before
ok, take a break… this was the most fascinating conversation of the entire week
Yeah, so don't install Python onto the OS. Use a toolbox https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/
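roughly (the default toolbox container is Fedora, so dnf works inside it):
```
# create the default toolbox container and step inside it
toolbox create
toolbox enter

# inside the toolbox, install whatever you need without touching the host OS
sudo dnf install -y python3 python3-pip
```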
this is awesome
omg, so awesome
you use linkedin?
I might need to call on you, for some high profile jobs… DevOps always seems to be difficult to find help
DM'd