Wrapping jdupes to aid in finding duplicate files.
As a way to find duplicate files on your system, jdupes is pretty good. However, jdupes lacks (at least) the following:
- the ability to sort results by file size (which is odd, given the task),
- the ability to set a lower limit on the file sizes to match (without one, the unsorted list fills with piles of tiny matched files),
- the ability to show file sizes in human-readable form.
The following jdupes.sh is a small bash script that attempts to help with those points.
figure — jdupes.sh bash script
Let's go over a few lines of this script:
- Line 3: The first if statement just displays usage information if no parameters are supplied.
- Line 11: We are not passing jdupes the -S parameter to show file sizes. We will get the size using a call to du later in the script (line 15), so that the size can be shown on the same line as the filename in the results. This allows us to sort the results by size. The stock jdupes command places the size above the filename line, so it can't be sorted.
- Line 12: The IFS= portion retains leading and trailing spaces in each line that is pumped through the "|" filter from jdupes. The -r after read treats backslashes ("\", as might be found in Windows paths) as normal characters.
- Lines 15-17: The size of the file is retrieved with du. set -- is then used to break the variable sz into separate $1, $2 ... $n variables, with $1 holding the actual size.
- Line 18: Checks whether the size is over a limit, so that we can avoid hundreds (thousands?) of matches on small files. The current limit is roughly 100MB (100000 1K-blocks, as reported by du). Set this as low as you wish.
- Line 19: Finally, we echo out the size followed by the filename, to be sorted by size on line 24.
```shell
#!/bin/bash

if [[ $# -eq 0 ]]; then
  echo "Nothing to do!"
  echo "Usage: jdupes.sh [device label or folder 1] [2] ... [n]"
  echo "Enclose names including blanks within double quotes."
  echo "e.g. $ jdupes.sh /media/user/MX500-2TB . \"/m/f/Seagate Backup\""
  echo "     will process 2 of User's drives and the current folder (\".\")."
  exit 1
fi
jdupes -r "$@" | {
  while IFS= read -r file; do
    if [[ ! -z "$file" ]]; then
      if [[ ! "$file" == *"bytes each"* ]]; then
        sz=$(du "$file")
        set -- $sz
        # $1 now holds the size (in 1K blocks)
        if [[ $1 -gt 100000 ]]; then
          echo $(du "$file")
        fi
      fi
    fi
  done
} | sort -n > /home/user/myjdups_sorted.txt
echo
echo "RESULTS"
tail -n 250 /home/user/myjdups_sorted.txt
```
Save this to your choice of filename.
Mark the resulting file as executable with chmod +x <your choice of filename>.
Note that du can report slightly different sizes for the same content on different filesystems:

```
filesystem                       size     file
NTFS (Windows formatted drive)   701384   General Class - Session 1 (2023-11-02).mp4
EXT  (Linux formatted drive)     701388   General Class 1 (2023-11-02).mp4
```
See also: Recovering Space; Recovering Space, Part 2; Recovering Space, Part 3: jdupes.sh (this file).
Additional things to check for, when trying to free-up disk space on Linux.
This post builds on our prior "Recovering Space" post.
- Clean Log File Folder
- Trim Email Logs
- Pare Fail2ban Database
- Remove Archive (.deb) Files
- Run QDirStat, ncdu
- First, we'll add a couple of CLI commands for our previous step 4, "Archive (.deb) file cleanup".
Here is how to clear the deb cache (container packages of installed or upgraded apps) and delete everything from /var/cache/apt/archives/ where they are kept.
```
102K  Feb 25  2022  libsasl2-modules_2.1.27+dfsg-2.1+deb11u1_amd64.deb
 68K  Feb 25  2022  libsasl2-modules-db_2.1.27+dfsg-2.1+deb11u1_amd64.deb
 48K  Dec 11  2021  libseccomp2_2.5.1-1+deb11u1_amd64.deb
154K  Jan 22  2022  libsmartcols1_2.36.1-8+deb11u1_amd64.deb
1.5M  Mar 14  2022  libssl1.1_1.1.1k-1+deb11u2_amd64.deb
1.5M  Mar 19  2022  libssl1.1_1.1.1n-0+deb11u1_amd64.deb
1.5M  May 10  2022  libssl1.1_1.1.1n-0+deb11u2_amd64.deb
1.5M  Jun 25  2022  libssl1.1_1.1.1n-0+deb11u3_amd64.deb
1.5M  Feb  7  2023  libssl1.1_1.1.1n-0+deb11u4_amd64.deb
1.5M  May 29  2023  libssl1.1_1.1.1n-0+deb11u5_amd64.deb
368K  Mar 21  2022  libsystemd0_247.3-7_amd64.deb
367K  Aug 28  2022  libsystemd0_247.3-7+deb11u1_amd64.deb
368K  Apr  7  2023  libsystemd0_247.3-7+deb11u2_amd64.deb
 54K  Nov 26  2022  libtasn1-6_4.16.0-2+deb11u1_amd64.deb
333K  Feb 23  2023  libtinfo6_6.2+20201114-2+deb11u1_amd64.deb
```

figure — example portion of archive package files.
The `sudo apt clean` (or `sudo apt-get clean`) command clears the /var/cache/apt/archives/ directory of retrieved package files, except for the lock file.

OR:

`sudo apt-get autoclean`

Like clean, autoclean clears out the local repository of retrieved package files, but only the package files that can no longer be downloaded. This lets you keep packages to roll back to, or otherwise use offline.
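To see how much you stand to recover before cleaning, a tiny helper can report the cache size. A minimal sketch — the function name `apt_cache_size` is ours, for illustration; it defaults to apt's archive directory but accepts any path:

```shell
# Report the total size of a package-cache directory.
# Defaults to apt's archive location; pass another path to override.
apt_cache_size() {
    dir="${1:-/var/cache/apt/archives}"
    du -sh "$dir" 2>/dev/null | cut -f1
}

apt_cache_size      # e.g. prints something like "247M" before a 'sudo apt clean'
```

Run it before and after `sudo apt clean` to see how much space you got back.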
- Image Thumbnails
Your system (largely the file managers) creates and stores thumbnail (small-sized) versions of photos, album covers and other images.
These thumbnails greatly speed up opening the file managers, but as you can imagine, the number, and total file size of these thumbnails can accumulate quickly.
These are stored in the folder ~/.cache/thumbnails, and you can safely remove them to save space. The thumbnails will be recreated, as needed.
```
$ du -ch ~/.cache/thumbnails
 41M   /home/user/.cache/thumbnails/large
 20M   /home/user/.cache/thumbnails/normal
180K   /home/user/.cache/thumbnails/fail/mate-thumbnail-factory
 88K   /home/user/.cache/thumbnails/fail/gnome-thumbnail-factory
272K   /home/user/.cache/thumbnails/fail
 61M   /home/user/.cache/thumbnails
 61M   total
```

As you may guess, start with /large; that subfolder uses the most disk space.
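Clearing the cache is a one-liner around rm. A hedged sketch — the helper name `clear_thumbs` is ours, and it takes an optional directory so you can try it somewhere safe first:

```shell
# Remove cached thumbnails; they are recreated as needed.
# Takes an optional directory so you can rehearse on a scratch copy.
clear_thumbs() {
    dir="${1:-$HOME/.cache/thumbnails}"
    rm -rf "$dir"/large "$dir"/normal "$dir"/fail
}

# Rehearse on a throwaway directory first, if you like:
demo="$(mktemp -d)"
mkdir -p "$demo/large" "$demo/normal"
clear_thumbs "$demo"
ls "$demo"        # (empty; the subfolders are gone)
```

When you are comfortable, run `clear_thumbs` with no argument to clean the real ~/.cache/thumbnails.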
- jdupes
jdupes is a forked, much faster (up to 7x) version of fdupes, a utility to find duplicate files on your system. Both of these utilities allow you to delete any duplicates; or you can redirect the results into a text file, so that you can more selectively choose what to delete. For more jdupes info, run:
apt show jdupes
To use jdupes:
```
sudo apt install jdupes
jdupes [first folder or drive to check] [opt. additional folder or drive to check] [etc]
```

For example:

```
# First, list out the connected drives.
sudo df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sdc1       4.6T  4.5T    17G  100%  /media/user/Current Work
/dev/sdd2       4.6T  4.5T   104G   98%  /media/user/One Touch
/dev/sdb1       1.9T  1.8T    55G   98%  /media/user/Seagate Backup Plus Drive

# Now run jdupes to find matching files.
# This will look for duplicates across /dev/sdc1 (.) and /dev/sdd2.
# -S was added to show the size of duplicate files.
# -r --recurse to look in every sub-directory.
jdupes -S -r /m/b/C* /m/b/O*
```
```
852882 bytes each:
./IMG_20110310_205619.jpg
./NIKON-Pictures (back-up)/DCIM/Camera/IMG_20110310_205619.jpg
./NIKON-Pictures (back-up)/2015/backups/IMG_20110310_205619.jpg

3243133 bytes each:
./2024/06/IMG_20240612_125030915.jpg
./2024/Moto Photos/IMG_20240612_125030915.jpg

3797570 bytes each:
./2024/09/IMG_20240326_190436955_HDR.jpg
./2024/Moto Photos/IMG_20240326_190436955_HDR.jpg
```

Finally, pick the files from those shown to delete using rm.
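As a safe way to rehearse that final rm step, here is a scratch-directory sketch (the filenames below are ours, not from the jdupes output above):

```shell
cd "$(mktemp -d)"                   # throwaway directory for the demo
echo "some data" > original.txt     # make a file...
cp original.txt duplicate.txt       # ...and an exact duplicate of it

rm duplicate.txt                    # remove the duplicate copy
ls                                  # → original.txt
```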
Sadly, while you can sort by modified date or filename, jdupes does not let you sort by size. We will add sorting to jdupes in an upcoming post. (edit: see jdupes.sh.)
- systemd journal entries
systemd can store GBs of past binary journal entries in /var/log/journal/. These files are used by admins to figure out system issues. You generally only need to keep the most recent ones around. The rest can be removed.
To see how much space these journal files are currently using, run:
```
sudo journalctl --disk-usage
Archived and active journals take up 213.9M in the file system.
```

or

```
du -sh /var/log/journal
215M  /var/log/journal
```

To cut down the number of journal files, run these steps.
- mark the currently active journal logs as "archived" and create brand new logs.
sudo journalctl --rotate
- This will keep just the last 3 weeks (3w) of journal entries.
sudo journalctl --vacuum-time=3w
You can vary the number and the length-of-time of files to keep. e.g. 2w saves 2 weeks worth of journal files, or 2d (2 days) worth, or 30m (30 minutes) worth.
Going forward, you can alter systemd's journal configuration to automatically keep a smaller number of journal files.
To do so: Make a backup of the journal config and edit the file.
```
sudo cp /etc/systemd/journald.conf /etc/systemd/journald.conf.back
sudo nano /etc/systemd/journald.conf
```
Uncomment (remove the '#') from one of these two settings. We recommend SystemMaxUse=500M to always retain the most recent 500MB of journal entries.
```
[Journal]
Setting           Description
#SystemMaxUse     Max disk space logs can take
#SystemMaxFiles   Max number of log files
```

Save the config file and restart journaling to reload the changed config:
```
sudo systemctl restart systemd-journald
```

You can also change a parameter in the config file from the command line, with sed:

```
sudo sed -i 's/#SystemMaxUse=100M/SystemMaxUse=500M/g' /etc/systemd/journald.conf
```
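After either edit, the relevant portion of /etc/systemd/journald.conf would look something like this (500M is our recommended example value; the other setting stays commented out at its shipped default):

```ini
[Journal]
SystemMaxUse=500M
#SystemMaxFiles=100
```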
For more info on journal files, see https://linuxhandbook.com/clear-systemd-journal-logs/.
- tune2fs
If you are working with a large hard drive, this one will probably get back the most space of anything in this article.
By default, Linux drives use the EXT file system. Also by default, EXT reserves 5% of the disk space to itself. This was done to give administrators some room to work if a system ran out of space. The amount 5% was chosen back when hard drives were very small but today, with TB-size drives, 5% represents a large chunk of space. For example, 5% of a 5TB drive is around 250GB.
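The arithmetic behind that 250GB figure is simple to check in the shell (the 5000 GB below is just our example drive size):

```shell
# How much space a 5% reserve costs on an example 5TB drive.
drive_size_gb=5000
reserve_pct=5
reserved_gb=$(( drive_size_gb * reserve_pct / 100 ))
echo "Reserved for root: ${reserved_gb} GB"   # → Reserved for root: 250 GB
```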
BEFORE tune2fs
/dev/sdb1 4.6T 28K 4.3T 1% /media/user/5TB
AFTER tune2fs
/dev/sdb1 4.6T 28K 4.6T 1% /media/user/5TB
First, get the list of partitions:
df -h
Next, use tune2fs with grep to see if blocks are reserved on your EXT drives. Replace /dev/sdb2 below with your device.
```
sudo tune2fs -l /dev/sdb2 | grep 'Reserved'
Reserved block count:     0
Reserved GDT blocks:      1024
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
```

Finally, run tune2fs and set the number of reserved blocks to 0.
sudo tune2fs -m 0 /dev/sdb2
tune2fs 1.46.2 (28-Feb-2021)
Setting reserved blocks count to 0
- Lastly, and perhaps obviously, don't forget to empty the trash. You can sometimes accumulate GBs of deleted files in there.
reference links:
* du file space
* df report file space
* sed stream editor
* rm remove files or directories
* GNU Core Utilities
* GNU Software
See also: Recovering Space; Recovering Space, Part 2 (this file); Recovering Space, Part 3: jdupes.sh.
...and many other LibreOffice books!
The excellent team at LibreOffice has released a guide for version 24.8. There are also similar guides for the component applications, like Draw, Calc, Writer, Impress — and even more specific books, like a tutorial for custom shapes.
As always, if you can afford to do so, please donate to help to keep this kind of work alive.
using the Uncomplicated Firewall
UFW
To install UFW, press ctrl+alt+t to open a terminal, then run:
sudo apt-get install ufw
UFW is very easy to use. There are plenty of sites that cover using UFW (see linode and marksei); but we just wanted to point out a couple of important tips.
First let's look at a very simple UFW list.
$ sudo ufw status numbered
will list your current allows & blocks, preceded by the numbered order, like this:
We'll go over this in more detail in a bit, but the basic layout of each line is:
[The line number]; the affected IP range/port; the action to take; the ip/port range to watch for; and an optional # comment.
The DENY IN actions deal with what can come into your machine, while the DENY OUT actions control what can get from your machine back out. The whole thing is quite powerful and easy to use, but there are some things to watch out for. Here are a few of them.
TIP 1: The Order Of The Rules Matters
The ORDER that you enter UFW rules means everything. When the rules are processed, the first match from the top with the incoming (or outgoing) IP request is the one that will be used. Subsequent rules that might match are then ignored. So you want to be careful not to open things up too much in the first few rules.
Let's take a look at this and see how the order of these rules affects things.
If (like a lot of people) you start by allowing HTTP access, which is through port 80, you would enter:
$ sudo apt-get install ufw
$ sudo ufw enable
$ sudo ufw allow 80
# This also adds a second record for IPv6 port 80.
and then start to deny IPs if need be:
$ sudo ufw deny from 123.123.123.123 to any
finally, list out what you have done so far:
$ sudo ufw status numbered
The 123.123.123.123 block is added to the end of your (IPv4) ufw rules as shown above. Subsequent incoming port 80 (http) requests from 123.123.123.123 will actually be *allowed* because the IP is first tested against the "allow any 80" rule and it gets through at that point.
To correct this, insert new individual IP blocks BEFORE the global allows and denies. So, to correct the list above, type:
$ sudo ufw delete 2
$ sudo ufw insert 1 deny from 123.123.123.123 to any
$ sudo ufw status numbered
Which results in:
Now when the IP 123.123.123.123 tries to access your machine it will hit rule [ 1] and be blocked BEFORE the global [ 2] ALLOW 80 rule is hit.
You can use this delete, insert sequence to nicely order the IPs that you allow or block; making it easy to see what you have when scanning the list.
TIP 2: Comments
You can comment your ufw rules, making it easy to know why you did things later on.
$ sudo ufw insert 1 deny from 123.123.123.122 to any comment ".cn, China"
$ sudo ufw status numbered
This is handy if you are unsure about an IP, and want to revisit it later.
Adding `comment ".cn, recheck after 2023-10"` is one way to mark it as such.
TIP 3: Blocking Ranges
You can block ranges of IPs with ufw.
This is useful if you start to recognize a pattern in, say, spam coming from 123.123.123.1 and .2 and .3 and .4. It is likely that many of the IPs in the range of 123.123.123.0 - 123.123.123.255 will also be spammers. If that range happens to be in a country (see tip 5) whose language you do not speak (or sell to), then it starts to be OK to block that whole .0 - .255 range.
To block any IP in the range of 123.123.123.0 to 123.123.123.255 (aka 123.123.123.\*) use:
$ sudo ufw insert 1 deny from 123.123.123.0/24 to any
Similarly, to block:
123.123.\*.\* use:
$ sudo ufw insert 1 deny from 123.123.0.0/16 to any
For 123.\*.\*.\* use:
$ sudo ufw insert 1 deny from 123.0.0.0/8 to any
Note: Be very careful with these, because you are blocking a wide range of ip addresses with them.
TIP 4: Calculating Ranges
See this post for calculating the ranges to block.
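Until then, a rough rule of thumb is easy to sketch: a /N prefix leaves 32-N host bits, so the block covers 2^(32-N) addresses. A small helper (the name `cidr_size` is ours, just for illustration):

```shell
# Number of IPv4 addresses covered by a /N CIDR block.
cidr_size() {
    echo $(( 1 << (32 - $1) ))
}

cidr_size 24   # → 256        (e.g. 123.123.123.0 - 123.123.123.255)
cidr_size 16   # → 65536
cidr_size 8    # → 16777216
```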
TIP 5: Finding Where An IP Is From
geoiplookup (install with sudo apt-get install geoip-bin, the Debian package that provides it) is used to show you the country that an IP is from.
For example:
$ geoiplookup 66.45.225.196
returns:
Also helpful is host, which will show the domain name associated with an IP:
TIP 6: Yes Virginia, There Is A GUI Version
Finally, even though the command-line ufw command is powerful, easy to use, and can be scripted to do even more advanced things, there is also a GUI version that you can try, called gufw (sudo apt-get install gufw).
gufw
There are some nice features in gufw, like the ability to add rules per application — which is shown above. The "Advanced" tab is where you would put in an IP, or a range of IPs, while the "Simple" tab allows rules for ports or named services, like "http".
or, Finding cool new things, for free
If you are looking for new ways to explore more about Linux, its tools, and applications; try this...
Spelunking (spi-ˈləŋ-kiŋ) is the hobby or practice of exploring caves. To spelunk, you just head into a cave and look around to see what's there.
You can experience most of the fun of cave-spelunking right from your chair (hence, seated-spelunking©), but with none of the falling, broken legs, cuts, bruises, getting lost, or the occasional brown bear that comes with caves.
It is true that our way (probably) will not result in finding cool stones, human remains or pirate treasure; but it's probably even better. What we will be spelunking through will be the vast trove of Linux tools, applications and information that is available for free to you, right now.
apt-cache
There are a number of ways to do this, including GUI ones like the package manager that came with your Linux.
We will use a command-line tool apt-cache because we think that you will find this approach the fastest and most flexible way to search.
Press ctrl+alt+t to open a terminal; then search the applications cache for something that you might be interested in. This will return a pile of results, including many things that you may not have known existed.
Here is an example.
Search for the word 'game'.
$ apt-cache search game
This results in:
Actually, the word 'game' was a bad example, because so many results are returned. There are 1,436 items associated with 'game' and we're only showing the ones starting with 'w' above.
If you see something that you are interested in, you can install it for free.
Let's install wesnoth, a turn-based strategy game that is sort of like 'World of Warcraft'.
wesnoth
To do so type:
$ sudo apt-get install wesnoth
apt will gather any supporting packages that are needed and install them as well.
For example, installing wesnoth also installs the following packages.
Don't worry: if you want to get rid of wesnoth and all of its supporting packages, this will do so:
$ sudo apt-get autoremove wesnoth
If you are getting too many results in your searches, you can add things like a sort to your command. This line will sort all of the results by passing the search results through a filter (that's the '|' character) to the linux command sort.
$ apt-cache search game | sort
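If the '|' filter idea is new to you, here is the same kind of pipeline run on three made-up sample lines — the printf stands in for a live apt-cache search, so you can see exactly what grep and sort receive:

```shell
# Simulate a few 'apt-cache search game' result lines with printf,
# then narrow and sort them. The package descriptions here are invented.
printf '%s\n' \
  "wesnoth - fantasy turn-based strategy game" \
  "0ad - real-time strategy game" \
  "gnome-chess - simple chess game" \
| grep -i strategy \
| sort
# → 0ad - real-time strategy game
# → wesnoth - fantasy turn-based strategy game
```

Swap the printf for the real `apt-cache search game` command and the same grep/sort narrowing applies.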
## So there you go
You are now off on your way to seated-spelunking.
What are some other interesting searches?
Try:
$ apt-cache search office
$ apt-cache search ide
$ apt-cache search editor
$ apt-cache search graphics
or whatever else you would like.
And remember that unlike cave spelunking — you are unlikely to run into a brown bear doing this from your chair.
The 50 Biggest Data Breaches
As companies rush to move their data to The Cloud, and to be Serverless; their data is being aggregated into fewer and fewer organizations. What follows are some points to consider before doing so and an appeal to factor in the possible long-term costs of moving to the large cloud vendors.
First off, in no way will your data actually be serverless. "Serverless" is simply a marketing phrase used by the cloud companies; rather, your data is now just on someone else's server -- and is no longer under your direct control.
In some cases you do not even control which other companies (let's call them "the partners" of the cloud company) will now have access to your data. As part of the contract terms you may have been required to accept having these partners help to manage your data.
Going "serverless" with the big cloud companies may save your company dollars in admin time and infrastructure today; but it is not unlike U.S. companies saving money by outsourcing manufacturing to China. You may save money today; but what is the cost in the long term, if those manufacturing plants are nationalized? Or even just shut down for an extended period by pandemic or war?
The same sorts of give-and-take applies to cloud computing. What will you do if your cloud vendor is taken offline by a hack? Or network outage? Or a government kill-switch?
Will it even be possible to decouple your code/data from the cloud approach if you ever need to?
But, the most important question is: What is the cost to your business if your customer data is compromised en masse?
Either now or in the not-too-distant future, when some distraught cloud employee decides to leak data, or a hacker defeats the cloud's security, or a government simply decides they want to see it that day, your data will be compromised.
The tricks used to defeat cloud security are becoming more advanced. For one example, consider "Cloud Squatting":
Yes, something like Cloud Squatting can be used against a server in your own facility. But when placing data onto the large cloud vendors' servers, many economies of scale are presented to the hackers. If they learn to defeat cloud security to get at one company's data, they have largely done so for many other companies' data at the same time.
Similarly, if the cloud vendor itself experiences a system wide problem (e.g. Amazon AWS outage) then many companies are hit at the same time. Carrying this same issue forward to the trend of "canceling" companies -- if a government/entity/group decides to ever block, say, AWS, they can do so much more effectively than if the same government/entity/group had to block tens of thousands of individual servers.
Just to show that data is being breached; and breached a lot, consider this graphic of the 50 largest data breaches so far:
source: ZeroHedge
The companies on the list include some of the largest ones, like Microsoft (250M records breached), Facebook (419M) and Yahoo (3.0B). Yes, that was B as in BILLION. Heck, Yahoo even makes the biggest list twice, with another breach of 500M on there. Ditto with Microsoft ... as they own LinkedIn.
Remember, these are only the 50 largest breaches. There are thousands more.
And many of the same companies shown have been breached other times as well.
The point is, big companies are not safe from this.
Hiding Breaches
Another issue is that large companies sometimes hide the fact that a breach occurred. Perhaps they want to save the hit on stock price that results. Or they may be looking for time to close the hole that was breached, before announcing it.
The real problem here is that while the cloud company twiddles around hiding the breach, the hackers are not waiting for them to disclose it before exploiting your customer data. The damage to the customers is much worse if a company tries to hide a breach. Yet they still do.
Our Recommendation
Force consideration of the long term, when considering a move to the cloud. Include the long term in disaster planning as well. Consider the possible future costs of having to re-buy servers (and administrators) in the cost equations.
And just like your home or office desktop computer, have a backup. Maybe ... like those good old in-house servers that got you to this point. Test the in-house backups periodically!
Don't forget to evaluate how (and how much it would cost) to pull your cloud data and all of your code that will be wired into cloud functions back in-house, if you ever need to. These costs go into the disaster plan too.
mkdir and cd in one easy step
So ... if you do a lot of command line work, you know that it is a drag to mkdir (create) a new folder and then have to separately cd into it. Especially if you do this a lot.
Here's a 3 letter command to do both.
First, let's get this up and working; and then we'll go over some of the parts for anyone that is new to this.
Enter the below mcd (make a folder and then cd into it) function into your .bashrc file.
```shell
mcd ()
{
    if [[ $# -eq 0 ]] ; then
        echo "Enter the (path)directory to create!"
        echo "usage: "
        echo "  mcd files"
        echo "  mcd work/files"
        echo "  ...or for folders with spaces, add quotes:"
        echo "  mcd 'work/new files'"
        echo "  mcd \"work/new files\""
        return 1    # exit would close the shell; return just leaves the function
    fi
    # -n checks that a folder name was given.
    # If so, the directory (with any parents) is created
    # and we then cd into the desired folder.
    [ -n "$1" ] && mkdir -p "$1" && cd "$1"
    echo "current directory: $(pwd)"
}
```
Then run $ source ~/.bashrc to load it (or just open another terminal).
Now you can type $ mcd your-desired-foldername and it will create the folder your-desired-foldername, after first checking that a folder name was actually given. That's what the [ -n "$1" ] && part does (and since mkdir -p is used, it does not complain if the folder already exists).
It continues on to cd into the folder. Finally, it shows you the path that you are in, just to confirm that it all worked; that's the closing echo "current directory: ..." line.
By adding the -p parameter to mkdir, mcd handles creating all folders in the path provided; building any required parent folders along the way.
The if [[ $# -eq 0 ]] ;
portion of code near the top of mcd is checking to make sure a folder name was given as a mcd parameter. It will print out usage instructions if nothing was provided.
If no parameter is passed in, this will be the result:
One usage note.
Like mkdir, mcd wants to have quotes ( ' or " ) around folder names containing spaces. Otherwise the shell treats each space-separated word as a separate argument, and you will not get the folder you intended.
So use:
$ mcd "my work files"
rather than
$ mcd my work files
if your folder names contain spaces.
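Here is a quick scratch-directory demonstration of what the shell does with unquoted spaces, using mkdir -p directly:

```shell
# Unquoted, the shell splits "my work files" into three arguments.
cd "$(mktemp -d)"            # work in a throwaway directory

mkdir -p my work files       # three separate folders!
ls                           # → files  my  work

mkdir -p "my work files"     # quoted: one folder, spaces and all
ls -d "my work files"        # → my work files
```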
fonts, fonts and more fonts.
It is not hard to see why Microsoft has been "all-in" on trying to get control of Linux recently.
Little known gems like fc-match are few and far between in the buggy operating system that Microsoft actually charges money for.
fc-match (a free utility) tells you information about fonts.
If you simply type fc-match in a terminal, it reports the font in use AND the fall-back fonts to use if the main one has a problem later.
$ fc-match serif will tell you the font used for serif,
and fc-match -s serif will tell you fallback fonts,
in order of best match.
Want to find a thin, sans-serif font on your system?
Try a related utility, fc-list:
$ fc-list | grep -i sans | grep -i thin | wc -l
which passes the full list of fonts to the first grep, which looks for anything named "sans". This is passed to the next grep to look for "thin" "sans" fonts, which is all then passed to wc -l, a line counter. Here, wc -l reports 127 thin sans fonts.
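To make the counting step concrete, here is the same two-grep pipeline run on a few made-up fc-list-style lines (the font names are invented for the example):

```shell
# Made-up sample lines standing in for real fc-list output.
printf '%s\n' \
  "Foo Sans:style=Thin" \
  "Bar Sans:style=Bold" \
  "Baz Serif:style=Thin" \
| grep -i sans \
| grep -i thin \
| wc -l
# → 1   (only "Foo Sans" is both sans and thin)
```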
... and that's not all!
See also:
fc-list - List your available fonts
and the wealth of data in your local fontconfig user's guide (HTML):
file:///usr/share/doc/fontconfig/fontconfig-user.html
Don't pay for more than you need to!
When we reviewed the VPS market for recent customer installs, we were surprised to find that the major VPS review sites only turned up the high-priced VPS services. Even searches for "cheapest VPS" or "inexpensive VPS" still showed the same higher-cost VPS companies. So, we set off to find why this was.
It seems that this is due to a few industry tricks that even the VPS review sites fall for (spoiler alert: Don't worry, we have a couple of non-tricky, fairly-priced, VPS providers below).
The project that led us down this path was a search for a suitable VPS for a new client in July 2021.
The desired specs for the VPS would:
• use KVM,
• allow the user to operate a GNU/Linux Debian instance, with full root access,
• be accessible via SSH,
• have at least 4GB RAM,
• have at least 50 GB of SSD storage,
• have at least 2 CPUs,
• come in around $15/month or less.
Does that cost seem shockingly low?
It may, if you have only read the big VPS review sites.
Good news follows, as we have two coming in under $15 below.
THE TRICKS
1. The click-bait price
The expensive providers often have a very inexpensive "first month" sale/deal/promotion price. It is JUST for the first month. And this is the price that a lot of review sites base their comparisons on. We actually recommend that you completely ignore this first month price and focus on the total cost for comparisons.
2. But only if bought in bulk
Once you get past that month-one price, they may show a fairly decent ongoing price. But make sure that you look closely. Often the attractive ongoing price is only if you order a year, two years, three, or as you will see below sometimes even four(!) years at once.
And these are often presented in reverse order. The FIRST (and lowest) price that you see will be the price for a three year commitment. It's just all very tricky.
3. Weird, just-a-little-off, configurations
Various configuration options are usually available (and they will not customize one for you); but none of them are really what you want, until you get up around $20-$40/month. The idea here seems to be to edge you up into a more expensive plan, by leading with "almost there" plans.
As we searched for our desired configuration, there were indeed all kinds of configs available, but none that made sense at a reasonable price-point.
For example, one VPS vendor just doubled each of the 3 main components (CPUs, RAM, Disk Space) at each level.
Here is a similar type-of example from A2Hosting:
figure — a2hosting-1
Now, if you truly have a dedicated CPU, then memory becomes the more important factor -- at least for a web server. It is more likely that you can run your (Linux) web server in 4GB of RAM and 2 CPUs than in 2GB/2 CPUs -- but it was very hard to find 4GB/2CPU as an offered option, at a reasonable price.
Do you really need 4TB of data transfer?
And, do you really need 450GB of SSD?
There are much cheaper VPS options for strictly-data storage, if that is what you need.
Most websites (or 3 of them; or even 10) do not need 450GB of disk.
40GB is often just fine.
So, it looks like they are adding things that you do not need at each price level, until you are finally ratcheted up to a level that costs too much for what you actually need.
4. Re-up hell
Regardless of the opening deal, the re-up price (the price that you pay after the first deals expires) is automatically much higher.
figure — IONOS-1
Let's take a look at just one of these in some depth.
Let's start with the options for hostinger (June 2021).
Below (figure host-1) is the opening set of choices that the user is presented with.
We found what appears to be a "close" match to our requirements in the $15.95/mo option.
You will notice that there is no mention of term on the opening page.
This implies that a month is, well, $15.95...
figure — host-1
So far, so good... that is, until you click the $15.95 "deal".
It would actually be $49.95/mo on a month-to-month basis.
It is only $15.95/mo if you sign up for FOUR YEARS!
They wanted $765.60 to continue!!!
And it renews after 4 years at $32.76/mo, or roughly twice as much!
How this does not amount to fraud is surprising.
figure — host-2
5. The expensive cancellation
Some vendors prefer to hit your wallet on the way out.
There were various clauses like special cancellation fees; or very high cancellation charges.
We certainly were not interested in any of these games ourselves and moved on.
But be sure to read the fine print yourself regarding cancelling (which can even mean the period when your contract simply finishes).
But Look At What We Found!
Amazingly, these vendors rarely turn up.
1.) GreenCloud
https://greencloudvps.com/billing/aff.php?aff=3052&a=add&pid=596
$7/month.
And, that is $7/mo for a single month, or a full year for $60.
PROs:
KVM. 4 CPU, 8GB RAM, 60GB SSD, Direct access to a headless OS (which is Debian 10 Standard, in our case).
We had 3 websites up and running, fully SSL/TLS certified, software fire-walled, with a Postfix email server running, within two days of having the VPS provisioned. We only had to contact GreenCloud once, for the PTR record discussed below.
We asked to make sure that the VPS was not actually on any of the large spyware platforms from Google, Amazon or Microsoft. We were told that No, GreenCloud operates their own servers.
CONs:
- Billing Issue
We ordered the site online, in a painless process. However, after ordering, and after seeing the payment successfully arrive at our bank just fine, GreenCloud then held it up for unknown-to-us reasons. GreenCloud could not tell us why other than their "fraud system put it on hold". They wanted us to email a copy of our credit card -- which nearly cost them this rave review :-)
One does not email credit cards in this day and age of "the Hacker".
A ticket to support got everything rolling again.
Variable CPU Count
Unlike many other KVM VPS providers, the number of CPUs is a variable. It is "up to 4" -- You don't really get a hard and fast 4. You share the 4 with others. This is what we would expect from an OpenVNZ vendor; not a KVM one. Note: It appears the RAM is also "depending on need" up to 8GB. We haven't experienced a problem (yet); but this is worth noting anyway.Login credentials Credentials to log into your VPS as well as into your control dashboard are emailed out in plain text.
No, no, NO!
This is a poor idea in today's hackerville internet.
Make sure to change your passwords immediately after getting this email.
- PTR Records
As with some other VPS providers, they won't let you set a PTR record for your IP right away. Ostensibly, the reason is to cut down on spammers, because one tool for defeating spammers is to check whether a PTR record is present. Since the VPS provider is the actual S.O.A. (Start of Authority), the PTR record has to be created by them, not through your registrar (you can create one at the registrar yourself -- it just will not work).
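Once the provider has created the PTR record, you can check it yourself. A minimal sketch (the IP below is just an example): `dig -x` simply queries the octet-reversed name under in-addr.arpa, and that reverse name can even be built by hand in plain bash:

```shell
# Check a PTR record (requires dnsutils; example IP only):
#   dig -x 149.20.4.15 +short
# dig queries the octet-reversed name under in-addr.arpa.
# Building that reverse name by hand:
reverse_ptr_name() {
  local IFS=.
  local -a o=($1)   # split the IP into its four octets on the dots
  echo "${o[3]}.${o[2]}.${o[1]}.${o[0]}.in-addr.arpa"
}
reverse_ptr_name 149.20.4.15   # 15.4.20.149.in-addr.arpa
```

If the dig query returns nothing, the PTR record is not in place yet and mail from your server is more likely to be flagged as spam.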
Make sure that you are getting the $7/mo deal from our provided affiliate link.
Their other prices are not as good.
Additional GreenCloud Services
You can customize (YAY!) your VPS by adding RAM/CPU/hard drive, at the following prices:
- $3/mo per 1GB RAM
- $3/mo per CPU core
- $3/mo per 10GB SSD
2.) InterServer.net
https://www.interserver.net/r/537557
This is another low-cost VPS provider that works pretty well.
PROs:
- Still comes in at $12/mo for an unmanaged (that means, you manage it), 4GB RAM, 50GB SSD with 2 CPUs.
- A (preferred) KVM VPS.
- Same headless Debian 10 standard with full root access.
- Same SSH access to the server.
- Their support is very good.
- Less of the 1-4 CPU, 1-8GB RAM stuff of GreenCloud. You get actual dedicated RAM and CPU slices.
CONs [relatively speaking]:
- They have some re-captcha issues (as of June 2021) making it hard to get around the site.
There is more detail about InterServer here.
The Net Result Of Our Search?
One year of GreenCloud VPS for $60.
3 running websites and a Postfix/Dovecot mail server, using around 0% (sic) of the available resources. So there is plenty of RAM and disk to grow into.
figure — Our System Use, via htop
figure — Available Space
Our customer is happy.
TikTok updated its privacy policy to 'allow' collection of users' biometric data.
The video-sharing service TikTok (a Communist Chinese company) revised its U.S. privacy policy, "allowing" it to collect biometric information such as faceprints and voiceprints from user content.
Per the new policy:
“We may collect biometric identifiers and biometric information as defined under U.S. laws, such as faceprints and voiceprints, from your User Content. Where required by law, we will seek any required permissions from you prior to any such collection,” the ByteDance-owned (Chinese) company said in a newly introduced section called Image and Audio Information. The company may collect information about “the nature of the audio, and the text of the words spoken in your User Content” so as to “enable special video effects, for content moderation, for demographic classification, for content and ad recommendations, and for other non-personally-identifying operations.”
Our first note is that "may collect" really means "will collect".
The fact that TikTok originated in communist China, which is already using social media to control people, should give you some pause.
biometric security - Be very careful with your biometrics. Unlike a password that you can easily change if compromised, the same is absolutely not true with biometrics. If they get your voiceprint, fingerprint, (or soon) retinal eye scan … they have it for-ev-er.
Even if the company (or spy agency) getting your biometrics today is on the up-and-up; that does not help you if that company gets hacked and your biometrics find their way to the dark web ... forever. Or if the country behind the spy agency dramatically shifts their "goals" somewhere down the line and decides that you are now a target... You might want to hand over your biometrics very grudgingly; and tape over that fingerprint scanner on your laptop and phone.
This is not the first problem with TikTok.
They have been caught:
Deleting information,
censorship (like suppressing coverage of Uyghur organ harvesting),
and they ALREADY collect data like this: “In its privacy policy, TikTok lists that it collects usage information, IP addresses, a user's mobile carrier, unique device identifiers, keystroke patterns, and location data, among other data. Web developers Talal Haj Bakry and Tommy Mysk said that allowing videos and other content being shared by the app's users through HTTP puts the users' data privacy at risk.”
KEYSTROKE PATTERNS!
Isn't a password, a pattern?
Our recommendation: Drop TikTok
Killing Spyware
The very popular, cross-platform sound-processing tool Audacity has been purchased by a corporate entity that has altered the licensing, so that your personal information can be collected and sent back to unknown places.
(more info)
Audacity is now a Possible Spyware, Remove it ASAP
Discussion: https://fosspost.org/audacity-fork-needs-help/#comment-66600
It is best just to remove audacity now and replace it with one of the alternatives below.
# to remove Audacity:
$ sudo apt remove audacity
Then
$ sudo apt autoremove
to remove the leftover pieces colored in yellow above.
As noted in the discussion link above, forked (cloned) versions of Audacity are already being created to remove the spyware problem.
In addition, you can always check alternativeto.net for similar software.
Command line tips
This is just a short tip to break down this command line syntax,
home:~$ (jedit the_code.py & disown)
; in case you run into this type of thing.
So then, let's just see what all of the "(", ")", "&" and "disown" stuff is all about.
The main part of the above command is the loading of a file (the_code.py) into a graphical (GUI) editor (jedit).
home:~$ jedit the_code.py
As you get more experienced with Linux, this is definitely something that you will want to do, as it saves you from digging through menus just to quickly run a GUI app.
This works just fine and dandy -- but it "hooks" jedit to your terminal.
Your terminal will receive any text messages put out by jedit, and closing the terminal will close jedit too. Worse, you cannot run anything else in the terminal until jedit is closed.
All of this is true regardless of what GUI app that you run in this way.
If we add "& disown" to the command:
home:~$ jedit the_code.py & disown
it does the same as above, except that jedit is "detached" from the terminal. You can then close the terminal by typing exit
or clicking/tapping the close button(s) at the top. jedit will continue running without the terminal.
If you do leave the terminal open; the terminal may still get a stream of messages from the disassociated application. This might be things like non-critical errors, warnings, or recommendations. So, the GUI app is still a little associated to the terminal, if only as a place to dump non-critical messages.
To most users, who might just be using the terminal to do other work; these messages can be annoying.
That brings us to wrapping the whole command in parenthesis, like so:
home:~$ (jedit the_code.py & disown)
This does everything the above commands do -- but also suppresses those diagnostic messages that may get dumped to the terminal from the GUI app.
ps.
A reminder that "&&" allows you to string together CLI commands.
So, this → home:~$ jedit the_code.py & disown && exit
runs jedit, dissociates the running jedit from the terminal, AND then closes the terminal. This leaves just jedit running, loaded with the file to edit.
In a similar fashion clear && jedit
first clears the terminal screen and then runs jedit.
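The difference between '&' and '&&' can be seen directly in the shell; a quick sketch:

```shell
# '&&' runs the right-hand command only if the left-hand one succeeded;
# '&' just puts the left-hand command into the background.
ok=$(true && echo "ran")                 # true succeeds -> echo runs
skipped=$(false && echo "ran") || true   # false fails   -> echo is skipped
echo "after true:  [$ok]"
echo "after false: [$skipped]"
# So:  jedit the_code.py & disown && exit
# backgrounds jedit, then disown succeeds, then the shell exits.
```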
Things to check when trying to free-up disk space
(At least from our point of view) Linux is better than Windows and MacOS in nearly every way.
That said, Linux does share one problem of those other operating systems.
Linux can run out of space too...
Using the following 97%† full SSD as an example, we will show a few places to look for disk space. We will add other suggestions over time and this post will become a sticky so that you can easily find it again.
1) Clean up the main log file back-ups:
As a sudo user, take a look in the /var/log folder.
You can remove:
*.1 files, which are the immediately prior versions of log files.
For example when the mail.log file [holds info from the mail system and info about individual sent & received emails] fills up; the file is rolled over to mail.log.1 and a new "mail.log" is created.
You'll see a lot of these /var/log/*.1 files; and they can be removed‡ .
*.gz (GNU Zip) files. In a rotation similar to what is detailed above, *.gz files are compressed versions of *.1 files. For example, if a mail.log.1 already existed, it would first have been compressed to a mail.log.2.gz file. Only then would the mail.log be rolled over to mail.log.1.
Similarly, if there had been a pre-existing mail.log.2.gz file, it would first have been moved to mail.log.3.gz, and so on.
All of these log files and back-ups are there to provide an administrator a good deal of info to have on hand, in the case of a problem. But they do take up a lot of space.
You can also delete‡ all of the /var/log/*.gz files too.
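It is worth a dry run before deleting anything. Here is a sketch that rehearses the find commands in a scratch directory; on a live system you would point find at /var/log (as root), after backing everything up:

```shell
# Rehearsal in a scratch directory standing in for /var/log.
tmp=$(mktemp -d)
touch "$tmp/mail.log" "$tmp/mail.log.1" "$tmp/mail.log.2.gz" "$tmp/syslog.3.gz"

# Count the rotated logs first (a dry run), then delete them:
rotated=$(find "$tmp" \( -name '*.1' -o -name '*.gz' \) | wc -l)
find "$tmp" \( -name '*.1' -o -name '*.gz' \) -delete

remaining=$(ls "$tmp")
echo "removed $rotated rotated logs; kept: $remaining"
rm -r "$tmp"
```

Running the bare find (without -delete) against /var/log first shows you exactly what would go.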
2) Other email logs:
If you are running POP3/IMAP email tools, like Dovecot, they can generate pretty big log files as well.
On our system above the dovecot.log was 549MB.
Back the file up, then truncate it in place with # : > dovecot.log
This empties it out while still retaining the file's permissions and ownership.
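A handy shell trick here: the `: >` redirection truncates a file to zero bytes without deleting or recreating it, so ownership and permissions survive. A rehearsal in a scratch directory (on the real system the target would be your dovecot.log, whose path varies by setup):

```shell
# Rehearsal in a scratch dir; substitute your real log path on a live system.
tmp=$(mktemp -d)
printf 'line1\nline2\n' > "$tmp/dovecot.log"

cp "$tmp/dovecot.log" "$tmp/dovecot.log.bak"  # back it up first
: > "$tmp/dovecot.log"                        # truncate in place

logsize=$(wc -c < "$tmp/dovecot.log")
baksize=$(wc -c < "$tmp/dovecot.log.bak")
echo "log: ${logsize} bytes; backup: ${baksize} bytes"
rm -r "$tmp"
```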
3) fail2ban's database:
If you are using fail2ban (and we think that you should be) to limit hacking, spamming, log-in, etc. attacks, then monitoring the size of its database file should be on your checklist.
/var/lib/fail2ban/fail2ban.sqlite3 3.1 GiB
A ".sqlite3" file is a database file. As a byproduct of cutting down or removing fail2ban.sqlite3, and recovering its space, you will also find that subsequent $ ufw insert ...
commands will run much faster; as will $ ufw status numbered
, etc.
"Much faster" in the case of this 3.1GB example meant "a few seconds" to run a UFW command, instead of a minute+. When you are trying to manually add a bunch of IPs to block, minutes instead of seconds really matters.
Since Fail2Ban works in part by catching evil IPs based on how quickly attacks come from the same IP; cutting down the database file size will allow Fail2Ban to process faster and thereby catch more too.
Restart Fail2Ban to recreate the .db file:
$ sudo service fail2ban restart
If you completely remove the file; you will see it recreated in a second or so; as the next new attacks are caught by Fail2Ban.
4) Archive files:
Whenever you update Linux, new versions of applications, docs and tools most often arrive in the form of packages (.deb files on Debian-based systems).
Prior versions of these packages are retained in the /var/cache/apt/archives folder in case you need to revert -- or simply for historical purposes, so that you know what has been installed in the past.
There are ways to automatically clear these out, but you can do so manually as well.
As you can see from this partial list of packages for just the Linux kernel, these files can really add up over time.
46.1 MiB [######### ] linux-image-4.19.0-16-amd64_4.19.181-1_amd64.deb 46.1 MiB [######### ] linux-image-4.19.0-14-amd64_4.19.171-2_amd64.deb 46.1 MiB [######### ] linux-image-4.19.0-13-amd64_4.19.160-2_amd64.deb 46.0 MiB [######### ] linux-image-4.19.0-12-amd64_4.19.152-1_amd64.deb 46.0 MiB [######### ] linux-image-4.19.0-11-amd64_4.19.146-1_amd64.deb 46.0 MiB [######### ] linux-image-4.19.0-10-amd64_4.19.132-1_amd64.deb 46.0 MiB [######### ] linux-image-4.19.0-9-amd64_4.19.118-2+deb10u1_amd64.deb 45.9 MiB [######### ] linux-image-4.19.0-9-amd64_4.19.118-2_amd64.deb 45.9 MiB [######### ] linux-image-4.19.0-8-amd64_4.19.98-1+deb10u1_amd64.deb 45.8 MiB [######### ] linux-image-4.19.0-6-amd64_4.19.67-2+deb10u2_amd64.deb 45.4 MiB [######### ] linux-image-4.19.0-5-amd64_4.19.37-5+deb10u2_amd64.deb 37.3 MiB [######## ] linux-image-4.9.0-8-amd64_4.9.130-2_amd64.deb 32.9 MiB [####### ] linux-image-3.16.0-7-amd64_3.16.59-1_amd64.deb 32.4 MiB [####### ] linux-image-3.16.0-4-amd64_3.16.51-3_amd64.deb
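The manual clear-out is `apt-get clean` (or `autoclean`, which drops only versions that can no longer be downloaded). Since those need root and a real cache, here is a sketch that rehearses the space check itself in a scratch directory:

```shell
# The real commands (as root):
#   du -sh /var/cache/apt/archives   # how much space is the cache using?
#   sudo apt-get autoclean           # drop only obsolete package versions
#   sudo apt-get clean               # drop the entire cache
# Rehearsing the space check in a scratch directory:
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/fake-kernel_1_amd64.deb" bs=1024 count=64 2>/dev/null
usedk=$(du -sk "$tmp" | cut -f1)
echo "cache dir is using ${usedk}K"
rm -r "$tmp"
```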
5) Spelunk!
Using QDirStat, ncdu or related tools, take a look at your home folder to see what sort of large, potentially deletable, files show up.
By default, ncdu will sort folder (and files) into largest to smallest order; which often makes the spelunking approach profitable as far as saving space goes.
Additional References
Scripts to reduce the Fail2Ban db (Advanced Users!)
https://gist.github.com/tschifftner/ac09d17e8878ec89d930387050b4224b
#!/usr/bin/env bash
# Tobias Schifftner, @tschiffnter
#
# Usage:
# bash fail2ban-cleanup.php <fail2ban.sqlite3>
FILE=${1:-"/var/lib/fail2ban/fail2ban.sqlite3"}
[ -f "$FILE" ] || { echo "$FILE not found"; exit 1; }
function sql() {
$(which sqlite3) "$FILE" "$@"
}
sql "DELETE FROM bans WHERE timeofban < strftime('%s', 'now', '-7 days');"
sql "VACUUM;"
This script flat out deletes the .db and restarts fail2ban.
https://gist.github.com/mitchellkrogza/bfcb5c14b4d9d2d2856f85f50b030186
† By the way, you should NEVER let a SSD get to 97% full. Try to maintain 20-25% of free space, so that the same physical area is not continually being overwritten -- which promotes SSD failures.
‡ As with ANY other changes to your system, we strongly recommend that you back up your system and these files, before happily deleting swaths of them. Trust us -- sooner or later, you will be very happy that you did.
An easy-to-use command line tool for checking your disk space
While QDirStat is a wonderful GUI tool for reviewing a drive's use of space, ncdu provides a similar thing in a textual context. This is particularly useful in a headless system setup, where there is no GUI; such as may be found on a server. In that type of situation, the terminal is your best friend; and that's where ncdu will work wonders.
But that's not to say that you will not jump to ncdu first even on your desktop ... the speed that ncdu runs at and its ease of use will make you remember it.
Like QDirStat, you can feed ncdu a folder to start with, when you first run it.
You can use the left and right arrow keys to move back and forth through previous folders and [enter] to drill down into a folder. And, you can delete highlighted files by pressing 'D'.
Visualizing your disk space
WinDirStat is a nice tool on Windows for visually seeing what's-what on your hard drives ... as far as space goes.
Fortunately a very similar tool exists for Linux, called QDirStat (Qt Directory Statistics).
QDirStat lets you view your disk either as a folder tree, or as a graphical representation. Or both ways at once. You can resize the top or the bottom to cover the full window.
QDirStat, Split Screen
If you click on a folder such as /usr, as shown, all of the files in that folder are highlighted in a box on the visual portion as well. The folder will be outlined in red, just as /usr is in the first QDirStat image above.
Click through the L1, L2 & L3 layout buttons at the top to see varying levels of detail of any file displayed in the top textual representation.
Individual files are color coded in the visual view, for easier identification.
• PDFs and EPUBs (documents) are dark blue,
• MP4s and MKVs (videos) are light green,
• ZIP (compressed) files are dark green,
• music is yellow, images are light blue, and so on.
• Most importantly, junk files such as .BAK files, are light red.
You can change the assigned colors, if desired, under the Settings menu.
You can click on any file and have its detail shown; or open file manager or the terminal to that spot; or even delete the file altogether. This is really a very nice way to quickly find the folders and files that are chewing up the most space on your disk.
A useful menu item here is Up One Level, which then selects the whole folder that a file was in. It is really very easy to negotiate around the drive using QDirStat's controls.
QDirStat, 100% Visual
Tip: When you start qdirstat from the command line, you can feed it the desired folder to start with.
e.g. from a command line in the terminal, the following will start qdirstat right on the /var/www folder; separate qdirstat from the terminal (& disown) -- otherwise, closing the terminal would also close qdirstat -- and add (&&) the exit command to close the terminal, leaving qdirstat running:
$ (qdirstat /var/www & disown) && exit
Similar Tools
GDMap (Graphical Disk Map) is similar to QDirStat but has far fewer features and detail. Critically, files can not be deleted using GDMap's interface.
K4DirStat, another similar tool, is functionally the same as QDirStat.
But on Debian, unless you are using the KDE desktop, installing the KDE K4DirStat tool will want to bring along ... a lot ... of extras.
(a lot:)
That said, you'll want to consider k4dirstat if you are already running KDE.
PinePhone: An inexpensive, (real) Linux phone
With all of the personal data collection and outright censoring being done on the Android and iOS phone platforms; a full LINUX phone that is actually affordable(!) now comes into play in the form of the PinePhone, from Pine64.
The PinePhone has been customized to run several interesting Linux variants, such as Manjaro running Plasma Mobile; Mobian (a Debian phone port); UBPorts (a user base port of UBUNTU Touch), postmarketOS (Alpine Linux) and KDE Plasma; with others on the way. By way of comparison, Android "uses" Linux, but it is only an older stub of the Linux kernel, with a lot of proprietary Google code lumped on top and around it. One example of this is the Google Play Store.
BUT BEST OF ALL, there are switches to turn off the front & back cameras, the modem (cell/GPS), WiFi & Bluetooth, the microphone and even the headphone jack. These are physical hardware switches, not software ones that can be overridden ... by others.
You do not have to provide an email or other identifying information to use the PinePhone, as you would on Android and iOS.
It has a removable Li-Po 2750 mAh battery and charges with a USB type-C cable that is provided.
Cost is $149 USD, or $199 USD for a convergence kit, which is twice the memory and a (little) device to let you hook up a monitor and a mouse easily. This allows you to try the PinePhone out as a "desktop" as well.
For more info, see:
PinePhone.
Mobian.
UBports.
KDE Plasma.
Mobian PinePhone wiki.
A very quick way to recall past CLI commands
We all know that ctrl-c/ctrl-v to copy & paste, and ctrl-z to (blessedly) be able to reverse one or more screw-ups, were truly great advancements in computing.
Well, ctrl-r on Linux is mighty helpful as well.
Pressing ctrl-r while in the terminal brings up a little search prompt that lets you enter any string. It then shows you the most recent command line containing that string. This is pretty convenient for remembering the parameters that you last used with any given command.
For example, typing ctrl-r, then scp brings up a history search starting with your last 'scp' copy command. Hitting the left or right arrow key switches this into an edit mode, where you can move back and forth to alter the line. Then just press Enter to execute it.
What ctrl-r is doing is searching back through your own history of terminal commands in reverse order. If you press ctrl-r again while in the search, it will show you the next-most-recent command containing that same string.
People who often use the same commands, but with different parameters, will come to love ctrl-r.
A slick, script to report on system resources
One of the first upgrades that a lot of users make after installing Linux is to add the command-line utility htop. htop presents the system resources in a somewhat better way than the default top command does.
htop
Well, we would like to suggest that you also take a look at bpytop.
bpytop is a plain-text Python script that you can just copy & paste directly onto your system.
bpytop almost has the feel of a compiled, packaged application, rather than "just" being a script.
bpytop allows you to see the CPU utilization (by internal CPU); overall memory usage; which applications are using the memory; all running applications; and even things like the temperature of each cpu.
You can alter the things shown by clicking "Menu" at the top.
Beyond being pretty cool, bpytop is also an excellent resource for learning Python scripting. Simply open bpytop in gedit, or your favorite text editor, to see how it all works.
Installation:
You can find bpytop here:
https://github.com/aristocratos/bpytop
There are full install instructions there,
but you can also just copy/paste the script from here:
https://github.com/aristocratos/bpytop/blob/master/bpytop.py
Open gedit and paste in the script:
$ gedit what_you_call_the_script
Make the script executable:
$ chmod 744 what_you_call_the_script
bpytop works on OSX, FreeBSD and Linux.
bpytop will also be available straight from the Debian repositories as of Debian 11 (coming soon).
Then you will be able to simply run this to install it:
$ sudo apt-get install bpytop
Virtual Private Servers
There are many options when deciding where to host your website. These range from hosting at the registrar (where you got your domain name from); to hosting on a dedicated machine (hardware that you buy/rent); to putting your site on Amazon (not recommended); and a wide variety of other types of choices (like running everything from your home desktop).
We have been on most of these options in the past ourselves.
But after undertaking a recent review of the best price/capability choices out there, we now recommend a VPS from InterServer.net.
As the name implies, a VPS is a Virtual Server which is portioned from an actual physical server. There can be many VPS servers within a single physical server. [This is how the VPS vendors make their money].
The CPU, memory, network bandwidth and physical disk space of the physical server can all be allotted to VPSes in different ways. They may have different disk sizes; differing amounts of RAM (memory); different network bandwidths and even different amounts of available CPU.
Or the individual VPS configurations within a physical machine can all be the same -- the approach varies between VPS vendors.
But beyond the configurability, the greatest aspect of a VPS is that it emulates an actual stand-alone machine. You actually get root (admin) access in Linux and can install any applications that you want. You can apt-get update/upgrade your operating system as much as you'd like to; and even upgrade your entire distribution from version to version.
(Don't worry if any of the buzzwords above are unclear, we'll help you with all of it.)
InterServer also has Windows Server VPSes as well.
There are three reasons that we recommend InterServer.
Flexibility. You can pick from a wide range of options.
Cost. They have the best prices that we have found (not counting way-offshore options, or similar things that are a bit murky).
Support. Interserver has been wonderful to work with; and helps very quickly with any technical details to get you started. In fact, when we wanted to praise their tech support to InterServer management, it was tough to single anyone out. They were All good.
As a current price comparison, we moved one of our Windows customers from an ISP, who runs dedicated and hosted solutions.
The price of the Windows solution had been $600/year (and was going to go UP! by a few hundred dollars in 2021).
We moved this client to an InterServer Linux VPS that now costs $120/year. Not only that, but with Linux, future upgrades to newer full versions of the operating system are ~free, and not hundreds of additional dollars, as they would have been with Windows.
That said -- if the customer had wanted to stick with Windows, InterServer offered that as well, for a little more.
If you try them out, please use our code (here). It costs you nothing more to do so; and we get a little bit back if you stay with them for a while. This kind of thing helps to keep sites without ads, like ours, afloat.
If you try them out on your own, please let us know what you think.
The ntfslabel command.
This is just a quick addition to our sharing external drives with Windows machines post.
To change the label of the drive that is shown in places like df and in the file manager, you would use ntfslabel.
Steps:
1. Using the df command, get the exact location of the partition that you want to relabel.
In our case, we are after the "Seagate Backup Plus Drive" partition, which is /dev/sdc2 (NOT /dev/sdc ... but /dev/sdc2).
2. Unmount the drive.
Click on the red triangle next to the name in file manager, or # umount /dev/sdc2
ntfslabel only works if the drive is unmounted. If you do try to run ntfslabel on a mounted drive, it will exit, telling you to unmount it first.
3. Then ntfslabel the drive and remount it. We will label the drive as "5TB-Red.Copy":
# ntfslabel /dev/sdc2 5TB-Red.Copy
And now a df -h returns:
a CIDR Tool
So, let's say that you are a new Linux admin (or you just want to block hacking attacks on your desktop).
And you just have found a contiguous range of ip addresses that are hitting your server; all trying the standard bevy of hacks and attacks on your website...
...and you would just like to block the whole range.
It is sometimes a pain to remember the right CIDR netmask numbers to add after the starting ip address to block a range.
Fortunately neustar has a nice tool to help with this calculation.
Just type in the starting ip (in our example, "149.20.4.15"), followed by anything from /32 to /1, to see what all would be blocked by your chosen range.
- /32 blocks just that ip.
- /31 blocks 2 ips. The starting ip and the next one.
- /30 blocks 4 ips.
- /29 blocks 8, and so on to
- /1, which blocks 2^31 ips, or around 2 billion addresses.
You may notice that whole 1, 2, 4, 8 doubling-thing that is going on there. Each lower number doubles the amount of ips that would be blocked by the prior number.
/24 is commonly used, as it blocks the lower 256 ip numbers; so 149.20.4.0/24 would block the ip range 149.20.4.0 to 149.20.4.255.
Tip: You don't want to use /1, as that can be a very wide range of ips. For example, 149.20.4.15/1 would block 2,147,483,648 ips, or about ½ of the IPv4 internet.
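The doubling rule above is just powers of two: a /N prefix fixes N bits of the address and leaves 32-N free, so it covers 2^(32-N) addresses. A one-line sketch:

```shell
# Addresses covered by an IPv4 /N prefix: 2^(32-N)
cidr_size() { echo $(( 1 << (32 - $1) )); }
cidr_size 32   # 1           (just that ip)
cidr_size 24   # 256         (a whole /24 block)
cidr_size 1    # 2147483648  (half the IPv4 internet)
```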
See: https://www.ultratools.com/tools/netMask
Note: The example IP used is from Debian. Don't block Debian :)
Command line tricks for users
For better or worse, if using Linux, you are likely to spend some time using the CLI (command line interface). However, once you start to do so, you might actually like it -- because the tools there are so powerful.
This post will be a collection spot for some simple, getting started, CLI tricks.
We'll try and organize these in a simplest to more advanced order.
1. Click the menu icon, Administration, then Terminal,
2. or press Ctrl+Alt+T,
3. or press Alt-F2 and type "Terminal",
4. or press the menu icon and type "Terminal" then enter.
Clicking to the Terminal
First, off: Returning to home
While in the CLI prompt, you will be situated in a folder (aka a directory). From there, you maneuver around to other folders using the cd (change directory) command.
By default, when you open the terminal to get the command prompt, you will be placed into your home folder/directory. So, it will be something like fred@WORK5:~$ (but instead it will show [your user name]@[your machine name]:~$ ).
What all of that just means is that you are placed in your home folder when you open the terminal, and your username is fred. As in:
/home/fred
The little tilde (~) character? That means "home" and is a shortcut for your home folder. When you see it in the prompt that signifies that you are in the home folder.
So, if you wanted to move to your documents folder, which is here (/home/fred/Documents) you would type $ cd Documents
Advanced folks! Just three more basics, please.
(We're also setting up the folder names to be discussed as well).
Lets assume that you are in this folder:
/home/fred/Downloads/work to be done/spreadsheets for 2020
There are two 'operators' for relative paths. '.' and '..'.
Here is how those work.
$ nemo .
This will jump you right out into the GUI file manager, opened at the current folder ('.'). You can also press the "Window" key at any time, to pop up the GUI menu, and escape that way.
Of course, typing
$ exit
to leave the terminal will work as well.
$ cd ../
moves you Up a folder.
In relative terms, this is to the 'parent' of where you started.
('.' is the folder that you started from).
So, BOTH of the following commands will move you up to the folder
/home/fred/Downloads/work to be done/
cd "/home/fred/Downloads/work to be done/"
and
cd ../
For the first one, you have to enclose the string in " " otherwise Linux (as well as Windows) does not know how to handle the spaces in the name.
We'll show you a trick for dealing with the spaces in a bit.
pwd
By default, you are shown the bottom-most folder in your path on the prompt.
So if you are in /home/fred/Downloads, your prompt will show fred@WORK:Downloads$
Something that we'll address in another blog post is how to configure the prompt the way that you like, including colors and how to show more, or less, path information.
But anyway, the pwd (print working directory) command shows you the full path to where you are at.
$ pwd
returns
/home/fred/Downloads
cd ~ and cd -
These are two special parameters to cd.
The first returns you to your home directory from wherever you are. Just cd ~
and you're home.
The second takes you to the folder that you were last at.
So, if you were in /home/fred/Downloads/work to be done/spreadsheets for 2020
and typed either
cd ../../
or
cd /home/fred/Downloads
to get in to /home/fred/Downloads ...
...then cd -
will pop you right back into
/home/fred/Downloads/work to be done/spreadsheets for 2020
Note: You will remember that '~' means the home folder.
"~" can also be used relatively.
So cd ~/Downloads
will also put you into
/home/fred/Downloads
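The cd - round trip can be rehearsed with scratch folders standing in for the post's example paths:

```shell
# Scratch folders standing in for /home/fred/...
base=$(mktemp -d)
mkdir -p "$base/Downloads/work to be done/spreadsheets for 2020"

cd "$base/Downloads/work to be done/spreadsheets for 2020"
cd "$base/Downloads"   # wander off somewhere else
cd - > /dev/null       # ...and hop straight back to the previous folder

here=$(basename "$PWD")
echo "back in: $here"
cd /; rm -r "$base"
```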
Secondly: Cutting down typing
Obviously, typing out
$ cd "/home/fred/Downloads/work to be done/spreadsheets for 2020"
is cumbersome.
But you can use the '*' wildcard character to ease the work.
So...
cd "/home/fred/Downloads/work to be done/"s*
(keep the * outside the quotes, or the shell will not expand it)
also gets you to:
/home/fred/Downloads/work to be done/spreadsheets for 2020
cd /home/fred/Downloads/w*/s*
will also get you there.
So will cd /home/fred/D*/w*/s*
as will cd ~/D*/w*/s*
This is tremendously useful if you know right where you want to be.
One slight caution: if, for example, there are two entries starting with 'D' in the Downloads folder, you'll get a
"bash: cd: too many arguments" error, because bash does not know which of the matches to pick.
But, that's ok, just add another character or two until you have a unique name, and press enter again, to go right to where you want.
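One subtlety worth knowing: the * only expands when it sits outside quotes, and bash does not re-split a matched path, so spaces inside matched folder names are handled safely. A sketch with scratch folders standing in for the post's example path:

```shell
# Scratch folders standing in for /home/fred/...
base=$(mktemp -d)
mkdir -p "$base/Downloads/work to be done/spreadsheets for 2020"

# Quote only the literal part; leave the wildcards unquoted so they expand.
# Matched paths are not re-split, so the embedded spaces are fine.
cd "$base"/D*/w*/s*
here=$(basename "$PWD")
echo "landed in: $here"
cd /; rm -r "$base"
```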
Describing what 'free' means to us.
So, we have told you about mostly free software; but what does that really mean for your budget?
"Free software" can mean different things (see the sidebar What is "free" software?).
For our customers we promote truly free-to-use software and applications.
So ... why does La Vojo show fees with some of the free software replacements?
What we are trying to do here is to provide back some support to the creators making the free software. These people live on contributions. The amount that they get determines how much and how frequently that their software will be updated.
We think that it is fair to help them, if we are using their tools to save our customers a lot of money.
These fees are admittedly arbitrary, so we would be pleased to work with you to jiggle them around -- with a goal of being fair to "the ones that brought us to the party" as well as to you, our customers.
In this context there are two types of "free".
1. The ability to see the source code of the tool or application. The vendor may charge for this software, with the main benefit being -- that you actually know what it is that you are getting. There are no hidden backdoors; malware; or other tricks buried within a lump of compiled binary software that you would get from, say, Microsoft.
2. Same as the above, but at no cost.
There are occasionally some other twists, like how the software is licensed to you (the user).
It can be licensed in a way that allows you to use the software, but if you make any improvements to the code of the software, you are supposed to offer those changes back to the originators, often for free.
That is fair for two reasons. First, you started with something that took a great deal of time, effort and money to create; offering back your changes is a very fair return.
Secondly, it is likely that you will still use the bulk of the original each time it changes. If you did NOT offer back your changes, you would have to reapply them each time the base changed. If you DID offer your changes back and the originator uses them, then each new update of the original would already have your stuff in it.
For a real life example of this, the Linux Mint Debian operating system is a version of Linux Mint, built on top of the upstream, originator Debian.
Or, like the BSD operating system, it can be licensed in a way that you can do whatever you please with it -- changing it; bundling it with your own software; or whatever you would like.
Our goal is to save you money and to promote open source software.
So we will be finding the very best, yet least expensive, option for you in all cases.
Here is a Very detailed, interactive kernel map.
This one is for advanced users. It is a highly detailed, interactive map of the functionality in the Linux kernel, by subsystem. If you double-click on an area of interest, you are shown a page breaking down that subsystem.
This one should keep anyone interested in the kernel, busy for a while.
Click the image below to try it out.
Stitching images together with ImageMagick's convert.
The command-line convert command, which is part of the very powerful ImageMagick image manipulation package, provides a very simple way to stitch together images. This can be done either vertically, using the -append parameter, or horizontally, using the +append parameter.
This can even provide a way to stitch together landscape photographs, to make one panoramic shot.
convert also works on more than two images. The examples below show 3 images pasted together to create the result-sprite.png file.
# Vertically (stacked):
$ convert image1.png image2.png image3.png -append result/result-sprite.png
# Horizontally (a sprite strip):
$ convert image1.png image2.png image3.png +append result/result-sprite.png
So for example, these two images ...
... can be pasted together into the below one using:
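A minimal, self-contained sketch of that paste (the tile colors and filenames here are made up for the demo; with your own photos you'd just list the two files):

```shell
# Requires ImageMagick; bail out quietly if it is not installed.
command -v convert >/dev/null || exit 0

cd "$(mktemp -d)"

# Make two small solid-color tiles to stand in for the photographs.
convert -size 40x40 xc:red  left.png
convert -size 40x40 xc:blue right.png

# +append pastes them side by side; -append would stack them instead.
convert left.png right.png +append combined.png
identify -format '%wx%h\n' combined.png
```

Two 40x40 tiles pasted with +append give one 80x40 image.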
A list on python.org for area developers
This is a mailing list for Bedford County, VA and Lynchburg, VA area python developers!
The list is for sharing tips; for python news; for getting help on projects; or for offering your help on projects ... both those that are paid and those that are not.
Python is a great language.
We all know that here.
To subscribe you can send an email with 'subscribe' in the subject to lynchburg-va-join@python.org.
...tips to ease the move to Linux
These are just some small tips when you're first moving to Linux from Windows/Mac. These are things that you are likely to use; so it may be helpful to have them all in one place.
| Item | on Windows | on Linux |
|---|---|---|
| Naming files. | Names are case insensitive. | Case counts!1 These are 3 different files: xyz.txt, XYZ.txt, Xyz.txt |
| Make a folder in the command prompt. | c:\>md | $ mkdir |
| Start a program/.bat and return to the prompt. | c:\>start [program] | $ [program] & disown -- this will continue to send status messages to the calling window. To suppress these messages (which can be useful), wrap the previous command in parentheses, e.g. ( [program] & disown ) |
| List a text file out. | c:\>type xyz.txt \| more | $ cat xyz.txt \| less (less allows you to move through the file with the up/dn arrow keys) |
| Copy a file. | c:\>copy | $ cp |
| Remove a file. | c:\>del | $ rm |
| ASK before overwriting or deleting. | | $ cp -i and $ rm -i (Here is how to do this "globally") |
| Check the version of most CLI2 commands. | | $ [program] --version |
| Get quick help on using most CLI commands. | | $ [program] --help |
| Get in-depth help on most programs. | | $ man [program] |
| Securely remove a file. | | rm'd files are potentially recoverable. For greater security, see shred: $ shred --help \| less |
| If you want Hillary Clinton-level hiding, see Bleachbit. | | $ sudo apt-get install bleachbit |
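As a quick illustration of the shred idea above, on a throwaway file (the filename is made up):

```shell
cd "$(mktemp -d)"
echo "secret" > note.txt

# Overwrite the file's contents with random data and then (-u) remove it.
shred -u note.txt

# The file is now gone.
ls note.txt 2>/dev/null || echo "note.txt is gone"
```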
To find your current OS version use lsb_release:
To find your current kernel version, use uname:
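For example (output will vary by system, and lsb_release may require the lsb-release package; the /etc/os-release fallback is our addition):

```shell
# OS / distribution version (fall back to /etc/os-release if
# lsb_release is not installed):
lsb_release -a 2>/dev/null || cat /etc/os-release

# Kernel version:
uname -r
```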
Some other notes.
Although there are others, the most common Linux version of a Windows DOS .BAT file is a #Bash shell script. Unlike .BAT files, it is not the file extension that makes a bash script a bash script. It is the fact that it is executable and contains #!/bin/bash as the first line. That said, it is a common practice to name bash scripts with a .sh extension; so that you know what they are. #debian-ish distributions will also try to execute a .sh file as a bash script by default, even without the starting line of #!/bin/bash.
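As a tiny illustration (the filename is made up), it is the shebang line plus the executable bit that make a bash script runnable:

```shell
cd "$(mktemp -d)"

# Create a minimal bash script; the #!/bin/bash first line -- not the
# .sh extension -- is what marks it as a bash script.
cat > hello.sh <<'EOF'
#!/bin/bash
echo "hello from bash"
EOF

chmod +x hello.sh   # make it executable
./hello.sh          # prints: hello from bash
```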
1 Case in Linux almost always counts.
So, for example, these two "a"s are 2 different parameters:
$ tree -a # All files are listed.
$ tree -A # Print ANSI lines graphic indentation lines.
Comments in bash scripts (and python, and config files, and other places) are preceded with a "#" character.
2 CLI is Command Line Interface, which would be the command prompt/DOS on Windows.
...tips to ease the move to Linux
This is a group of mostly .mp3 podcasts on various aspects of Linux and open source. Several of the RSS feeds just list out the clickable .mp3s, and that's great for something to play in the background while converting applications to their FOSS equivalents.
| Podcast | About | Links |
|---|---|---|
| Hacker Public Radio | Linux and other info | |
| The LinuxLink TechShow | Chatting about Linux and things tech. | |
| Linux Action News | | https://linuxactionnews.com/ https://www.jupiterbroadcasting.com/tag/linux-action-show/ | |
| | | https://www.jupiterbroadcasting.com/show/linuxun/ |
| TalkPython | | https://talkpython.fm/episodes/show/211/classic-cs-problems-in-python |
| List of Python Podcasts | | https://www.fullstackpython.com/best-python-podcasts.html |
Now available for code ports and open source conversions in VA.
We are now helping companies in Lynchburg and the Bedford County, Virginia area with
- code ports (ASP/ASPX/.NET/VB→to→Python),
- scripting ports: BAT→BASH scripts,
- database ports (Access→MariaDB; SQL Server→PostgreSQL, SQLite3 work),
- operating system ports (primarily Windows to Linux) and
- package installs (for example, Office 365 to LibreOffice).
Please let us know if we can help to save you money!
Lynchburg, VA
By default, bash will show a date and time in this format:
Sat Jun 22 22:04:22 EDT 2019
To see the date in 12-hour format, like this:
Sat, Jun 22, 2019 10:04:36 PM
...follow the steps below.
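The original steps are not reproduced here, but as a sketch, one GNU date format string that produces that 12-hour layout (which you could bake into an alias) is:

```shell
# 12-hour format, e.g.: Sat, Jun 22, 2019 10:04:36 PM
# %-d and %-I are GNU extensions that drop leading zeros.
date "+%a, %b %-d, %Y %-I:%M:%S %p"
```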
easing the move to Linux.
OK, we are a little torn on this tip.
There are a number of CLI things such as ls not dir, cat not type, mkdir not md, that take some time to get used to when shifting over to Linux.
It is not unlike how you may mistakenly write the prior year on checks and documents for days (or months) after the calendar changes to January 1st.
One of those tough habits to break is that on Windows, either cd .. or cd.. will move you up a level in the folder tree. If you're really used to cd.., well, that one doesn't work on Linux. Linux treats cd.. as a distinct command and, of course, you'll get a "command not found" when trying that on Linux.
The torn aspect is that there is a way on Linux to create an alias for commands, so that typing cd.. executes cd .. instead.
BUT – is that the right thing to do? Given a little bit of time, you'll shift to doing it "the Linux way" – it is just that the first bit of time getting used to it can be annoying.
BUT BUT! – if you're switching to Linux long-term, you might as well learn to do things the right way. Plus it will be potentially embarrassing to go to a colleague's Linux computer and be trying to DIR, MD, CD..-ing your way around.
That's our quandary.
So -- we will show you how to create an alias and let you decide how to approach this. We do so next by using leafpad, a simple notepad-ish text editor, to open the hidden file .bashrc. .bashrc contains settings for how your bash session looks and behaves. (A '.' in front of a file or folder name makes it hidden, by default.)
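If you do decide to go the alias route, the line to add to ~/.bashrc is just this (the alias name is your choice):

```shell
# Aliases in scripts need this; interactive shells have it on already.
shopt -s expand_aliases

# The DOS habit: make "cd.." work by aliasing it to the real command.
# (In practice you'd put just the alias line in ~/.bashrc.)
alias cd..='cd ..'
```

Open a new terminal (or run `source ~/.bashrc`) and cd.. will then behave like cd ..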
or, when Free = Bad.
2019-06-21
Review: Google Chrome has become surveillance software. It's time to switch
https://www.mercurynews.com/2019/06/21/google-chrome-has-become-surveillance-software-its-time-to-switch/
HN Discussion: https://news.ycombinator.com/item?id=20254051
A tech columnist's latest privacy experiment found Google Chrome ushered more than 11,000 tracker cookies into our browser — in a single week.
Google Has A Secret Page That Records All The Things You've Bought Online
https://www.buzzfeednews.com/article/katienotopoulos/gmail-google-tracks-online-shopping
Gmail's "Purchases" page collects and sorts out all of your online shopping and in-app purchase receipts.
...tips to ease the move to Linux
Something that can be a pain on Windows is programmatically altering the screen resolution. Not through GUI tools, but running a script or simple utility to do so.
On Linux, xrandr is the answer to this. xrandr allows for live (re)configuration of the X server without restarting it.
You can start off with what your current setup is by running xrandr:
$ # To just list out your current monitor(s)
$ # (in BASH, '#' on a line is a comment -- and will not execute what follows.)
$ xrandr --listmonitors
Monitors: 1
 0: +*DP-1 1680/531x1050/299+0+0  DP-1
Then you can move on to make changes.
$ # ... and then, to actually go forth to make the change:
$ xrandr --output DP-1 --mode 1440x900
$
$ # Like most CLI commands, if it works, there will be no output.
Using other xrandr parameters, you can alter the screen's refresh rate, the orientation of the monitor, which of multiple monitors is the preferred one, and a number of other settings.
See xrandr --help for more options.
...tips to ease the move to Linux
Nearly all of the functionality of Windows (or Mac) applications can be found in similar applications on Linux today, with the added plus that many on Linux are free! Everything from Office [Excel, Word, Powerpoint, Access], to Skype, to Photoshop, to even Quickbooks has Linux open source equivalents.
But there may be cases where you want to run a native Windows app on your Linux pc. That's where Wine comes in. Wine is an app that provides Windows APIs that your Windows programs use to run. There are a very large set of Windows apps that have been confirmed to run in Wine. The current list is available here.
winetricks is a helper script to set up the proper settings for your Windows app to run under Wine. A winetricks tutorial can be found here.
(a collection of reasons to change things up.)
2019-04-03
Facebook Demanding Some New Users' Email Passwords
https://www.thedailybeast.com/beyond-sketchy-facebook-demanding-some-new-users-email-passwords
'Beyond Sketchy': Facebook Demands Users' Email Passwords
"Mark Zuckerberg admitted recently that Facebook doesn't have a 'strong reputation' for privacy. An odd new request for private data probably won't help with that rep.
Last year Facebook was caught allowing advertisers to target its users using phone numbers users provided for two-factor authentication; users handed over their numbers so Facebook could send a text message with a secret code when they log in. More recently the company drew the ire of privacy advocates when it began making those phone numbers searchable, so anyone can locate the matching user "in defiance of user expectations and security best practices," wrote the Electronic Frontier Foundation, a civil liberties group."
2019-04-29
Amazon Has Gone From Neutral Platform to Cutthroat Competitor
https://onezero.medium.com/open-source-betrayed-industry-leaders-accuse-amazon-of-playing-a-rigged-game-with-aws-67177bc748b7
...AWS is striking at the Achilles' heel of open source: "lifting" the work of others, ...These critics see Amazon's decision to recreate Elasticsearch as ...
2019-05-14
Adobe is now telling its users they can be sued for using old versions of photoshop
https://www.vice.com/en_us/article/a3xk3p/adobe-tells-users-they-can-get-sued-for-using-old-versions-of-photoshop
If you are still using CC, it might be time to consider alternatives which let you actually own software instead of renting it:
https://www.diyphotography.net/using-older-adobe-cc-apps-could-get-you-sued-adobe-warns/
2019-05-10 Adobe is no longer allowing subscribers to download previous versions of Premiere and is even sending notices to people who still have them installed to say they're no longer allowed to use them. pic.twitter.com/8t0tx8FTeO -- ASHLEY LYNCH (@ashleylynch) May 10, 2019
2019-05-23
Snapchat Employees Abused Data Access to Spy on Users
https://www.vice.com/en_us/article/xwnva7/snapchat-employees-abused-data-access-spy-on-users-snaplion
Multiple sources and emails also describe SnapLion, an internal tool used by various departments to access Snapchat user data.
When packages are installed through apt they are brought in as archive files (usually .deb files) by default and (most often) stored in the /var/cache/apt/archives folder. .deb files are archives, not unlike .zip files. Once on your system they are unpacked and installed by the install process.
If these archive files become corrupt -- for example, through a network disconnect while they are being downloaded -- the archive cannot be extracted and installed.
Here is an occurrence of that and the messages you'll see, using an issue we encountered with the libqt5webkit5 archive. This is followed by the fix.
You can first try install with the -f (fix broken) parm to attempt a fix.
$ sudo apt install -f
If that doesn't work, this will.
Remove the corrupt file from the archive cache.
The subsequent upgrade will determine the removed .deb file is needed for other things and then bring it down again fresh.
$ cd /var/cache/apt/archives
$ sudo rm libqt5webkit5_5.7.1+dfsg-1_amd64.deb
$ sudo apt-get update
$ sudo apt-get upgrade
Although we do not recommend this, if you want to nuke ALL of the files in the archive cache -- then this command will do so.
$ sudo apt-get clean
However it is not a bad idea to keep those copies of the original install packages,
- as a backup,
- to know for sure what the source files were and
- as a place to reinstall from if your internet connection is down.
Finally, to remove dependencies that are no longer needed:
$ sudo apt-get autoremove
"The following packages will be kept back"
If during a $ sudo apt-get upgrade
you see a message that
"The following packages will be kept back" followed by one or more package names, it means that the upgrade would require another package to be deleted, or a new package to be installed so that yours can be installed.
To actually complete the install, run:
$ sudo apt-get --with-new-pkgs upgrade
This will first alert you to what other packages need to be deleted or installed to get your desired one installed.
... tips to ease the move to Linux
The default #debian GUI file manager is nemo.
$ nemo .
A couple of nemo tweaks that might be helpful are:
In nemo, click Edit -> Preferences -> Behavior tab
[x] Click on file's name twice to rename it.
Preferences->Display tab
choose: [Permissions] rather than [None]
[x] Show the full path in the title bar and tab bars
[x] Show advanced permissions in the file properties
Preferences->Preview tab
Only for files smaller than: 2GB
Preferences->Toolbar tab
[x] Show refresh button
[x] Show new folder button
Other things:
There are many possible file managers that you can install to try out, but two of particular interest are nautilus (which keeps us in that whole 20,000-leagues-under-the-sea thing) and the text-based CLI file manager ranger.
Ranger is fast and easy to move around in. Press enter over applications or files to run them. For example, you can easily move through .mp4 files this way; pressing "Q" to end the video and return to ranger. The left and right arrow keys quickly move you around the folder hierarchy. It's fun to use.
ranger
tips to ease the move to Linux
find is a CLI[^1] way to locate files.
$ sudo find / -type f -mtime 0
The -mtime 0 parameter says to find only files modified today (within the last 24 hours).
-mtime -3 would return the last 3 days' worth of files, while -mtime +3 returns only files older than 3 days.
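You can see the -mtime behavior for yourself with a couple of throwaway files (the paths and names here are made up via mktemp):

```shell
dir=$(mktemp -d)

touch "$dir/new.txt"                  # modified just now
touch -d "5 days ago" "$dir/old.txt"  # modified 5 days ago

# -mtime 0  -> files modified within the last 24 hours (new.txt)
find "$dir" -type f -mtime 0

# -mtime +3 -> files modified more than 3 days ago (old.txt)
find "$dir" -type f -mtime +3
```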
Running as sudo (root) spares you from seeing a large swath of warning messages, such as the following:
$ sudo find / -type f -mtime 0
find: '/proc/23237/task/23237/fdinfo': Permission denied
find: '/proc/23237/task/23237/ns': Permission denied
find: '/proc/23237/task/23245/fd': Permission denied
find: '/proc/23237/task/23245/fdinfo': Permission denied
find: '/proc/23237/task/23245/ns': Permission denied
find: '/proc/23237/task/23246/fd': Permission denied
find: '/proc/23237/task/23246/fdinfo': Permission denied
find: '/proc/23237/task/23246/ns': Permission denied
find: '/proc/23237/fd': Permission denied
find: '/proc/23237/map_files': Permission denied
find: '/proc/23237/fdinfo': Permission denied
find: '/proc/23237/ns': Permission denied
find: '/sys/kernel/debug': Permission denied
find: '/sys/fs/fuse/connections/8388626': Permission denied
[^1]: CLI = Command Line Interface
...tips to ease the move to Linux
If you are used to the Windows (DOS) tree command as a way of getting a complete, opened-up list of folders and the folders & files they contain -- then good news.
tree exists on Linux as well.
$ tree
As always, try --help to see the various parameters available for tree.
$ tree --help
Just a few examples are:
-q Print non-printable characters as '?'.
-N Print non-printable characters as is.
-Q Quote filenames with double quotes.
-p Print the protections for each file.
-u Displays file owner or UID number.
-g Displays file group owner or GID number.
-s Print the size in bytes of each file.
-h Print the size in a more human readable way.
The last one there, -h, is pretty interesting as it includes a concise size of the file or folder on each line.
$ tree -h
And a bit of info to store away for later ... tree also has a -X parameter, which returns an XML representation of your files. This XML list can then be pulled into other applications or reports.
$ tree doc -X
...tips to ease the move to Linux
Here are several steps that admins can take before agreeing to install user-requested applications. A good goal would be to have the users do these things before coming to you...
$ apt-cache search thetoolname
will give you a list of related tools that can be checked for the best fit.
$ apt-cache show thetoolname
will give you a blurb about what this particular tool/app is for.
Check the application on Debian's popcon ("popularity contest") to compare how many reported installations there are of the requested tool.
https://popcon.debian.org/
or
https://qa.debian.org/popcon.php?package=thetoolname
to directly search for your tool.
For example, for htop (which is a nice, clean-looking, command-line process viewer) popcon yields:
... along with the corresponding maintainer's page:
https://qa.debian.org/developer.php?package=htop
When installing -- always check over the dependencies. See if they would bring in something you may not want.
debsums is another tool, for verifying installed package files against MD5 checksums.
$ debsums -l   # or --list-missing
# Lists packages which don't have an md5sums file.
(It should really come back with nothing.)
$ debsums -c
# Checks installed files against their checksums, reporting only those that have changed.
All of this helps to give you a feel of the general acceptance of the tool in question.
One additional place to ask questions about packages is on the #debian user's email list.
Subscribe by sending an email to:
debian-user-REQUEST@lists.debian.org
with a subject of:
Subject: subscribe your@email.com
You can find past mailing list threads here:
https://lists.debian.org/debian-user/
And other #debian mailing lists here:
https://lists.debian.org/users.html
matrix.org compromised via hack
matrix.org was compromised via hack March 13, 2019...
The normally fairly reliable Matrix project, which provides secure, encrypted chat and communication applications, has been hacked. Be sure to check their status before downloading any packages from the site.
From the Matrix page linked below is this quote:
Here's what you need to know.
An attacker gained access to the servers hosting Matrix.org. The intruder had access to the production databases, potentially giving them access to unencrypted message data, password hashes and access tokens. As a precaution, if you're a matrix.org user you should change your password now.
The matrix.org homeserver has been rebuilt and is running securely; bridges and other ancillary services (e.g. this blog) will follow as soon as possible. Modular.im homeservers have not been affected by this outage.
The security breach is not a Matrix issue.
Uh-huh.
Anyway -- for more info, see:
https://matrix.org/blog/2019/04/11/we-have-discovered-and-addressed-a-security-breach-updated-2019-04-12/
and
https://matrix.org/blog/2019/04/18/security-update-sydent-1-0-2
The current status (at any time, not just for this event) of Matrix is here:
https://status.matrix.org/
...tips to ease the move to Linux
Do you have a hankering for that nice mid-90s Green or Amber on black terminal look?
You can easily get there, along with a translucent background, using the Sakura terminal.
$ sudo apt-get install sakura
$ nano ~/.config/sakura/sakura.conf
There are up to 6 different color schemes that you can alter. You can cycle through these in Sakura by pressing shift-ctrl-F1 through shift-ctrl-F6.
Alter this simple scheme with the RGB (Red, Green, Blue) colors below.
# green
51,255,0
with 0, 0, 0, 0.9 back
which is black with an opacity of 90%.
# amber
255, 176, 0
with 0, 0, 0, 0.9 back
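As a sketch of what those values look like in sakura.conf (the exact key names vary between sakura versions, so treat these colorset keys as assumptions and check your own file):

```ini
[sakura]
# colorset 1 (shift-ctrl-F1): green on translucent black
colorset1_fore=rgb(51,255,0)
colorset1_back=rgba(0,0,0,0.9)

# colorset 2 (shift-ctrl-F2): amber on the same translucent black
colorset2_fore=rgb(255,176,0)
colorset2_back=rgba(0,0,0,0.9)
```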
...tips to ease the move to Linux
If you'd like to compare text files, binary files, or whole folders across drives, then meld will provide side-by-side comparisons for up to three! choices at once.
http://meldmerge.org/
A meld compare can also be started right from the command line. Here two folders are compared from the CLI. This will kick off the meld GUI and auto-start the compare.
$ meld "/media/user/5TB (external)/home/user" /home/user
...tips to ease the move to Linux
Finding the largest file using the command line:
Within a given folder
$ ls -rSl -h
In all folders, but sorted biggest to smallest only within each folder
(that is, you can't sort a whole drive biggest to smallest with ls.)
$ ls -rSl -h -R
The following find command will recursively find all files (that's the -type f parm) in all subdirectories of ".". "." is your current folder -- you can specify a different folder here. For each file, du -h is run. du shows disk usage, and the -h parameter gives human-readable results -- so, for example, 1.1G instead of 1138884 one-kilobyte blocks.
Finally, the output is piped ("|") through a filter to be sorted.
$ find . -type f -exec du -h {} + | sort -r -h
...tips to ease the move to Linux
f.lux is an excellent time-of-day screen-brightness auto-correction tool for Windows, Mac and kinda for Linux. f.lux automatically adjusts the monitor's color and brightness so that you are not blinded with bright blue light at night. Or the first thing in the morning.
While a nice tool, the f.lux Linux setup is several fussy steps and the package itself IS NOT in the #debian repository. That itself is not death of course -- but it does remove all of those extra repository eyes that help to keep track of whether applications are broken, hacked, etc.
So.
We recommend redshift for reducing that bright blue glare at night.
$ sudo apt-get install redshift
$ sudo apt-get install redshift-gtk
# adds a taskbar monitor option.
To run it, you'll want to disassociate the app from the terminal, like this:
$ (redshift-gtk & disown) && exit
If you right click on the taskbar icon you can choose to [x] autostart redshift with the system.
tips to ease the move to Linux
So, if you have just landed in Linux, coming from the GUI of the Mac or Windows, we wanted to gather some immediate tips here.
It's mkdir, not md for command line folder creation. (This one will take some time to change to if you have spent a lot of time in DOS/Command prompt :-)
| more in DOS, to control the flow of moving through a text file, is | less (!) in Linux. And type is cat.
So, for example, where you would do
c:\>type help.txt | more
in DOS, it would be
$ cat help.txt | less
in Linux.
| less is a lot more powerful, as you can scroll back and forth within the text file you are viewing.
Most linux commands come with a small-to-large reference available by typing man ("man" for manual) in front of the command. Therefore, man cat will help you with other parameters available for "cat":
$ man cat
NAME
       cat - concatenate files and print on the standard output

SYNOPSIS
       cat [OPTION]... [FILE]...

DESCRIPTION
       Concatenate FILE(s) to standard output.

       With no FILE, or when FILE is -, read standard input.

       -A, --show-all
              equivalent to -vET

       -b, --number-nonblank
              number nonempty output lines, overrides -n

       -e     equivalent to -vE

       -E, --show-ends
              display $ at end of each line

Manual page cat(1) line 1 (press h for help or q to quit)
| Tip | |
|---|---|
| mkdir | If you have been a command line person in the past, you'll know that md is the way to make a folder (directory) in the Windows command prompt (DOS). mkdir is the way to do it in Linux. mkdir brings with it an option to make all folders in a given path, if needed. So, if you do Not have a /home/lavojo/files/places-to-keep/tips folder, then entering mkdir -p /home/lavojo/files/places-to-keep/tips will create all of the folders necessary in one shot. |
| /home | It is |
| cmp | Much like fc on DOS, cmp is used to compare files on the command line. |
| dir | Use ls. ls -larth gives the familiar DOS-type listing of files. |
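The mkdir path-creation behavior described above can be sketched like this (the folder names are made up, and a temp folder stands in for /home):

```shell
base=$(mktemp -d)

# -p creates every missing folder in the path in one shot,
# and does not complain if some of them already exist.
mkdir -p "$base/files/places-to-keep/tips"

ls "$base/files/places-to-keep"
```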
...tips to ease the move to Linux
1. Cost-trap
Microsoft prefers to issue OEM Windows licensing, which binds the license to one particular machine. If you purchase a new machine, you cannot re-use the existing OEM license that you paid for -- you must buy a brand new one.
In some cases you cannot even upgrade hardware on an EXISTING Windows machine without having to purchase a new Windows license.
Given that many, many software applications exist now that are cross-platform, such as LibreOffice, you are no longer stuck with using just Windows -- especially for common business tasks. We can even show you how to get refunded for having to pay for a copy of Windows on most new machines (this is called "The Windows Tax").
2. Sneakiness
When Microsoft rolled out Windows 10, they did a number of ~evil things. This included changing the meaning of the [x] on the close-dialog of the annoying, repeated "Do you want to install Windows 10 now" dialog to mean "Yes, I want to install it" -- something that [x] never meant in the past. And this was done mid-stream, without telling people; so after clicking [x] many times to STOP Windows from updating, [x] now did the reverse.
Windows 10 will also turn back on settings to share your data; even if you've switched them off. Typically, this has been happening during Windows 10 updates; which the user can no longer prevent from happening. This obliterates true 'User Choice'.
As can be seen in the below images, there are literally hundreds (thousands?) of Linux distributions (distros).
Anyone who wants to -- and possessing the proper gumption -- can fork (clone) a copy of most open source software, so that they can make the changes that they feel are important. This can range from fixing one problematic bug all the way to coming up with something that looks and/or acts very differently from the original.
This is a strength in four ways:
1. If you want to alter open software -- you can do so.
2. Distros exist that are specialized for many purposes. There are graphics ones; HAM radio ones; distros for writers, for doctors, for firemen, etc.!
3. Changes made to forked distros often bubble back up to the original; strengthening ALL related distros.
4. If someone falters, there are other choices.
Distributions
Distrowatch.com is a great place for news on distros, new releases, which are hot and more.
...tips to ease the move to Linux
If your office is moving from Windows (or Macs) to Linux -- existing portable drives raise some issues. Here are some of the things to be aware of; along with some ideas on managing drives going forward.
First, the good news is ... existing Windows NTFS drives will work with most #debian-based distros. Most everything works as expected: Thumbnails for media, file properties, etc.
However, there are some differences. NTFS and EXT4 (the most common #debian file system -- and the one that we recommend) handle some internal dates, like "date accessed", differently. File ownership and access rights in general are handled a little differently. But by and large, you can copy a file from an EXT4 partition to an NTFS one (and back) and it works as you expect.
Linux file and drive utilities will work on the NTFS drives.
BUT! drives actually formatted for EXT4 will not work on Windows machines without being reformatted for NTFS.
EXT4 is actually a superior file system to NTFS. NTFS has a number of bugs, and EXT4 is a journaling file system, which keeps the file system consistent as files change and are moved around. Plus there is simply a multitude of Linux tools available to work with files and drives in any way that you can conceive of.
If you are going to have users mixing and matching external drives between Windows and Linux here are two ways to help to know what is what and where what is... cough ...
a. Set up two partitions on each drive: one NTFS and one EXT4. Have the Windows folks write to the NTFS partition only, and do the same with the Linux users but on the EXT4 portion. Yes, they will occasionally write to the wrong place. Well, the Linux users may do so, but the Windows users will never even be able to get to the EXT4 areas. By and large, the right stuff will be in the right places, and users will learn to check for the correct spot to write to.
b. A second, and an easy to figure out what-is-what way for both users and admins, is to choose a color drive for Linux and a DIFFERENT color for Windows. Perhaps even using a third color for MACs. Everyone will know what is on a drive by the color and then you can even format Linux drives with EXT4 without causing problems for non-Linux users.
A physically colored drive is better than having sticky-notes on them or trying to write on the drives with a sharpie.
#drive #health
Just to briefly note these two tools here, as we will cover these in more depth in the future.
Here are two drive monitoring tools that can be used as an early warning system for drive sector errors, excessive failed writes, or pending total drive failure.
smart-notifier - graphical hard disk health status notifier
smartmontools - control and monitor storage systems using S.M.A.R.T.
...not the happy, fluffy things that they are cracked up to be
You won't know if you got hacked
"Everything gets hacked, whether it is by malicious actors using vulnerabilities in a system or through very basic phishing emails. Despite all your efforts to choose the right online storage solution, you could still get hacked. In that case it is essential for you to be aware of the hack as quickly as possible, as you probably want to be able to take action immediately and limit potential damage.
Big companies are not famous for warning their customers after a hack if they can avoid it. They will likely hope that the hack will stay unnoticed so they can keep their users' trust, as it has happened in the past. The only way to make sure you are aware of any incursion on the server where you store your files is to have control over your own infrastructure and be able to monitor what happens with your data."
https://nextcloud.com/blog/the-issue-with-public-cloud/
Bad Cloud examples:
2019-01-26 Make Sure to Download Your Flickr Photos This Weekend - Because this Cloud Service will be deleting everything over 1,000 of your Photos if you are on the Free Account
If you have over 1,000 photos uploaded to Flickr, then you should download them now or risk losing them forever.
Back in April of last year, Yahoo sold Flickr to the company SmugMug. In November SmugMug announced it planned to end the free unlimited image storage that Flickr offered users in January, and instead limit users to 1,000 photos worth of storage for free.
If you have more than 1,000 images on Flickr, then it's a really big deal. Starting February 5th those extra photos are going to be deleted, starting with your oldest ones.
See https://lifehacker.com/make-sure-to-download-your-flickr-photos-this-weekend-1832073708
I spent weeks deleting over 10,000 of my photos off Flickr and now host them on my own hosting at https://photos.gadgeteer.co.za...
2019-06-10 US Customs And Border Protection's Database Of Traveler Photos Was Stolen In A Data Breach
"CBP learned that a subcontractor ... transferred copies of license plate images and traveler images collected by CBP to the subcontractor's company network. The subcontractor's network was subsequently compromised by a malicious cyber-attack."
We have never been a proponent of cloud computing, where "cloud" can simply be defined as someone else's computer.
The reasons were:
Criminal hacking,
(even your own) state-sponsored criminal hacking,
#Idiot-moves, like the one below, which can happen if you do not control your data.
Database of Over 198 Million U.S. Voters Left Exposed On Unsecured Server link
We have disavowed the cloud since this marketing-oriented name was first trumpeted.
Our clients are still on dedicated machines, or on VMs, and we will NOT use:
- AWS (with its 600-million-dollar CIA contract);
- Azure (with Microsoft being the #1 entrant into the NSA's spying program);
- or iCloud, for any reason.
Files that are encrypted today, done in any manner, will be easy fodder for quantum computers soon enough; and grouping them all together in someone else's cloud where THEY control the access to the files is just … well … a disaster waiting to happen.
People (read that: Companies) who put things in the cloud damn well deserve what they will get.
But God bless 'em anyway.
2017-09-22
Verizon Wireless Internal Credentials, Infrastructure Details Exposed in Amazon S3 Bucket
https://threatpost.com/verizon-wireless-internal-credentials-infrastructure-details-exposed-in-amazon-s3-bucket/128108/
Verizon is the latest company to leak confidential data through an exposed Amazon S3 bucket.
2017-10-05 (update)
Yahoo says all 3 billion accounts hacked in 2013 data theft
https://www.zdnet.com/article/yahoo-believes-3-billion-affected-by-2013-hack/
"Yahoo on Tuesday said that all three billion of its accounts were hacked in a 2013 data theft, tripling its earlier estimate of the size of the largest breach in history and sharply increasing the legal exposure of its new owner, Verizon."
2017-12-19
Every Single American Household Exposed in Massive Leak
https://www.infosecurity-magazine.com/news/every-single-american-household/?utm_content=buffereb7a9&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
Yet another Amazon S3 cloud storage misconfiguration has affected 123 million Americans, across billions of data points.
Hacker News Comments: https://news.ycombinator.com/item?id=15965060
2018-01-03
Degraded performance after forced reboot due to AWS instance maintenance
https://forums.aws.amazon.com/thread.jspa?threadID=269858
Hacker News Comments: https://news.ycombinator.com/item?id=16064611
2018-01-09
Security flaw in CPUs breaks isolation between cloud containers
https://diasp.org/posts/0b6b25a88fe8fc1ca17821f669c2004d67df5841
2018-01-09
Hardcoded Backdoor Found In WD My Cloud NAS With Username "MyDlink"
https://fossbytes.com/hardcoded-backdoor-wd-mycloud-devices-username-mydlink/
"In yet another revelation of severe loopholes, a security researcher James Bercegay from Gulftech has discovered a backdoor in some models of the My Cloud NAS (Network-attached storage) drive family, manufactured by Western Digital. According to the blog post, the vulnerabilities, which include a hardcoded backdoor, can be used to access files even on a […]"
2018-01-12
"You trust the cloud?"
https://blog.jospoortvliet.com/
"What surprised me a little was how few journalists paid attention to the fact that Meltdown in particular breaks the isolation between containers and Virtual Machines - making it quite dangerous to run your code in places like Amazon S3. Meltdown means: anything you have ran on Amazon S3 or competing clouds from Google and Microsoft has been exposed to other code running on the same systems.
And storage isn't per se safe, as the systems handling the storage just might also be used for running apps from other customers, who then could have gotten at that data. I wrote a bit more about this in an opinion post for Nextcloud.
We don't know if any breaches happened, of course. We also don't know that they didn't.
That's one of my main issues with the big public cloud providers: we KNOW they hide breaches from us. All the time. For YEARS. Yahoo did particularly nasty [things], but was it really such an outlier? Uber hid data stolen from 57 million users for a year, which came out just November last year."
2018-02-06
Leaky Amazon S3 Bucket Exposes Personal Data of 12,000 Social Media Influencers
https://threatpost.com/leaky-amazon-s3-bucket-exposes-personal-data-of-12000-social-media-influencers/129810
2018-02-08
Gojdue Variant Eludes Microsoft, Google Cloud Protection, Researchers Say
https://threatpost.com/gojdue-variant-eludes-microsoft-google-cloud-protection-researchers-say/129837
2018-03-30
Under Armour App Breach Exposes 150 Million Records
https://www.darkreading.com/endpoint/privacy/under-armour-app-breach-exposes-150-million-records/d/d-id/1331411?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple
A breach in a database for MyFitnessPal exposes information on 150 million users.
2018-05
LA County Nonprofit Exposes 3.2M PII Files via Unsecured S3 Bucket
https://www.informationweek.com/whitepaper/cybersecurity/security/the-biggest-cybersecurity-breaches-of-2018-(so-far)/399463?gset=yes&cid=cybr&_mc=cybr
"A misconfiguration accidentally compromised credentials, email addresses, and 200,000 rows of notes describing abuse and suicidal distress."
(more) https://www.darkreading.com/cloud/la-county-nonprofit-exposes-32m-pii-files-via-unsecured-s3-bucket/d/d-id/1331875?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple
2018-05-30
Honda India Left Details of 50,000 Customers Exposed on an AWS S3 Server
https://www.bleepingcomputer.com/news/security/honda-india-left-details-of-50-000-customers-exposed-on-an-aws-s3-server/
https://gbhackers.com/honda-leaked/
"Honda Car India has left the personal details of over 50,000 users exposed on two public Amazon S3 buckets, according to a report published today [by] Kromtech Security. […]"
"Honda Car India leaked the personal information of over 50,000 users of its Honda Connect app, stored in publicly unsecured Amazon AWS S3 buckets. Experts recently discovered two public unsecured buckets; inside, the AWS bucket contains an unprotected database maintained by the Honda Connect App. Honda-Connect is a smartphone app that boasts that it gives the user […]"
2018-06-04
Google Groups Are Leaking Your Sensitive Emails: Here's How To Fix It
https://fossbytes.com/how-to-fix-google-groups-misconfiguration/
"If you are using Google Groups, you need to check your privacy settings right now and make sure that the configuration doesn't leak any sensitive information. This message comes from Kenna Security which found that nearly one-third of 9,600 public Google Groups leaked sensitive information in emails sent through the platform. The security firm found such public […]"
2018-06-04
What is wrong with Microsoft buying GitHub
https://news.ycombinator.com/item?id=17225599
"According to Bloomberg [1], Microsoft is said to have agreed to buy GitHub [2]. That GitHub, which has reportedly been losing money, is being acquired is a major development because of its central role in the development of many open and closed source projects.
For the uninitiated here is what GitHub does in a nutshell: GitHub allows computer programmers from around the world to conveniently collaborate on projects, share bug reports and fix those bugs and allows the administration of some project documentation. The company provides this service for free to entities that provide their code for free to the world and for 'closed source' projects there is a fee to be paid. GitHub is in essence a friendly wrapper around [3]Git, an open source version control system written by Linus Torvalds (of Linux fame) and many others. Git already does decentralized repository hosting out of the box but it does not support any kind of discovery method, bug tracking or documentation features, GitHub built a community of programmers around Git and many open source contributors consider GitHub too big to fail.
Companies that are too big to fail and that lose money are a dangerous combination, people have warned about GitHub becoming as large as it did as problematic because it concentrates too much of the power to make or break the open source world in a single entity, moreso because there were valid questions about GitHubs financial viability. The model that GitHub has - sell their services to closed source companies but provide the service for free for open source groups - is only a good one if the closed source companies bring in enough funds to sustain the model. Some sort of solution should have been found - preferably in collaboration with the community -, not an 'exit' to one of the biggest sharks in the tank.
So, here is what is wrong with this deal and why anybody active in the open source community should be upset that Microsoft is going to be the steward of this large body of code. For starters, Microsoft has a very long history of abusing its position vis-a-vis open source and other companies. I'm sure you'll be able to tell I'm a cranky old guy by looking up the dates to some of these references, but 'new boss, same as the old boss' applies as far as I'm concerned. Yes, the new boss is a nicer guy but it's the same corporate entity. Some concrete examples of the things Microsoft have done:
Abuse of their de facto monopoly position to squash competition, including [4]abuse of the DD process to gain insight into a competitors software
Bankrolling the [5]SCO Lawsuit that ran for many years in order to harm Linux in the marketplace
Abuse of their monopoly position to unfairly compete with other browser vendors, including [6]Netscape
Subverting open standards with a policy of [7]Embrace, Extend, Extinguish
The recent [8]Windows 10 Telemetry abuse
The acquisition of Skype, after which all the peer-to-peer traffic was routed through Microsoft, essentially allowing them to snoop on the conversations. To pre-empt the technical counter argument that this was done to improve the service: It only improved the service for some edge cases, for everybody else the service got worse because of the extra round-trip latency. So if that was the real reason then you'd have expected to see the traffic routed to the central servers only if one of those edge cases was detected.
Unfair advantage over competitors by using internal APIs for applications unavailable for competing products
Tied-sales and bundling
Abuse of [9]Patents
The list is endless. So, this is the company that you want to trust with becoming the steward of a very large chunk of the open source world? Not me. And for all you closed source customers of GitHub, do you really want the company that abused a due-diligence process faking an acquisition interest to have the inside scoop on your code?
I've deleted my GitHub account, I'll find a way to replace it and if you're halfway clever so should you. Foxes may change their coats, they don't change their nature."
References
[1] https://www.bloomberg.com/news/articles/2018-06-03/microsoft-is-said-to-have-agreed-to-acquire-coding-site-github
[2] https://github.com/
[3] https://en.wikipedia.org/wiki/Git
[4] https://en.wikipedia.org/wiki/Stac_Electronics
[5] https://en.wikipedia.org/wiki/SCO-Linux_disputes
[6] https://en.wikipedia.org/wiki/Browser_wars
[7] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish
[8] https://www.independent.co.uk/life-style/gadgets-and-tech/news/windows-10-sends-personal-data-to-microsoft-even-if-users-tell-it-not-to-10453549.html
[9] https://www.computerworld.com/article/2560825/enterprise-applications/microsoft-fat-patents-upheld.html
2018-06-05
MyHeritage Alerts Users to Data Breach
https://www.darkreading.com/myheritage-alerts-users-to-data-breach/d/d-id/1331966?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple
A researcher found email addresses and hashed passwords of nearly 92.3 million users stored on a server outside MyHeritage.
"MyHeritage, a platform designed to investigate family history, learned of a data breach on June 4, 2018. It reports the incident affected email addresses and hashed passwords of nearly 92.3 million users who signed up for the site before and including Oct. 26, 2017, the date of the incident.
A security researcher discovered a file named "myheritage" containing email addresses and passwords on a private server outside the site. Further analysis found the file was legitimate, with the data originating from MyHeritage. No other data was detected on the server, and there was no evidence of account compromise. MyHeritage handles billing through third parties and stores sensitive data such as DNA and family trees on segregated servers with added security."
2018-06-07
Ticketfly cyberattack exposed data belonging to 27 million accounts
https://www.zdnet.com/article/ticketfly-cyberattack-exposed-data-belonging-to-27-million-accounts/#ftag=RSSbaffb68
Financial information is thought to be safe.
2018-06-27 A little-known Florida company may have exposed the personal data of nearly every American adult, according to a new report.
"Wired reported Wednesday that Exactis, a Palm Coast, Fla.-based marketing and data-aggregation company, had exposed a database containing almost 2 terabytes of data, containing nearly 340 million individual records, on a public server. That included records of 230 million consumers and 110 million businesses.
"It seems like this is a database with pretty much every U.S. citizen in it," security researcher Vinny Troia, who discovered the breach earlier this month, told Wired. "I don't know where the data is coming from, but it's one of the most comprehensive collections I've ever seen", he said."
https://www.marketwatch.com/story/a-new-data-breach-may-have-exposed-personal-information-of-almost-every-american-adult-2018-06-27
2018-06-29
A massive cache of law enforcement personnel data has leaked
https://www.zdnet.com/article/a-massive-cache-of-law-enforcement-personnel-data-has-leaked/#ftag=RSSbaffb68
Exclusive: The data revealed that some police departments are unable to respond in an active shooter event.
2019-06-27
Open Marketing Database Exposes 5 Million Personal Records
https://www.bleepingcomputer.com/news/security/open-marketing-database-exposes-5-million-personal-records/
An unsecured MongoDB instance belonging to health insurance marketing website MedicareSupplement.com was discovered online last month containing as many as 5 million records. The data cache included personal information as well as health details.
...tips to ease the move to Linux
A popular Windows tool is WinDirStat, which shows a graphical display of the files on a drive or folder. This is great for quickly finding large uses of disk space; largest files; etc.
Perhaps the Linux tool that looks the most like WinDirStat is gdmap.
But gdmap lacks a right-click menu (or any similar approach) to view a particular file, delete it, or open the folder containing it -- for you to act on your own. It looks pretty, but it does not work pretty.
An interesting alternative to try is baobab.
baobab optionally shows a text list of the files alongside the graphical view, or you can choose to deal with just the graphical version. You can drill into folders and open the folder holding a given file, right from the graphical presentation.
To install baobab
$ sudo apt-get install baobab
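If you want a quick text-mode cousin of that "largest first" view without leaving the terminal, du and sort can approximate it. This is a rough sketch; the helper name dir_sizes is our own, not a standard tool:

```shell
# Summarize each subdirectory of $1 (-s), print sizes in human-readable
# units (-h), then order them with sort -h, which understands K/M/G suffixes.
dir_sizes() {
  du -sh -- "$1"/*/ 2>/dev/null | sort -h
}

dir_sizes .    # the largest subdirectories of the current folder appear last
```

It is no substitute for baobab's drill-down rings, but it is handy over ssh where no graphical session exists.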
Caution
When adding things outside of Linux to Linux, you are (vastly) increasing the chance of backdoors, viruses, and a myriad of related problems. Running Windows programs within Linux is a good way to get there!
Proprietary software, like Windows, is most-often closed source, meaning that people outside of Microsoft do not get to review the actual code for bugs, errors, omissions or backdoors.
A backdoor is a portion (or many portions!) of the code left open to hackers, either intentionally or by error. A backdoor can be used in many ways, by known or unknown people. It can be used to compromise your system, your contacts, other data and even your hardware (see Stuxnet).
This issue is not isolated to operating systems though. Using any closed-source software carries the same risks.
With open source software -- anyone can review the code for errors and security holes such as backdoors. You have multiple sets of eyes looking for problems and this has been a tremendously successful way of improving software.
Some Linux distributions default to including only free software; #Debian is an example of this.
...tips to ease the move to Linux
The classic Advanced Bash-Scripting Guide:
http://tldp.org/guides.html#abs
Beginners might prefer the Bash Guide for Beginners:
http://tldp.org/guides.html#bbg
"The Bash manual page is concise because it is a Unix manual page. Unix manual pages are supposed to be concise, because they are meant to be reference documents, not tutorials. In the GNU project, this is what the Info documentation is for."
https://mywiki.wooledge.org/BashGuide
"This guide aims to aid people interested in learning to work with BASH. It aspires to teach good practice techniques for using BASH, and writing simple scripts.
This guide is targeted at beginning users. It assumes no advanced knowledge -- just the ability to login to a Unix-like system and open a command-line (terminal) interface. It will help if you know how to use a text editor; we will not be covering editors, nor do we endorse any particular editor choice. Familiarity with the fundamental Unix tool set, or with other programming languages or programming concepts, is not required, but those who have such knowledge may understand some of the examples more quickly."
https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/utilities/V3_chap02.html
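To give a flavor of the good habits those guides drill in, here is a small sketch (the function name and messages are our own): always quote your expansions, so filenames containing spaces survive intact instead of being split into separate words.

```shell
# Unquoted $1 would split "a file.txt" into two words at the space;
# quoting "$f" keeps the whole filename as one argument.
check_file() {
  local f="$1"
  if [[ -e "$f" ]]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
}

check_file /    # → found: /
```

The guides above cover why [[ ]] is safer than the older [ ] test in bash, among many other such details.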
open source-ish desktop, laptop and server options.
https://zareason.com/Desktops/
https://www.ubuntushop.be/index.php/en/opensource-notebooks/debian-notebooks.html
Clicking 'X' to dismiss Windows 10 upgrade doesn't stop install
... Microsoft has steadily made it more difficult to opt out of Windows 10 upgrades. The company has reworked its installation messages to imply that consumers couldn't opt out of upgrading, but clicking on the red "X" at the top right of those messages still canceled the process. According to reports streaming in from multiple sources online, Microsoft has changed this behavior. Clicking the X does nothing to stop the upgrade process now.
...tips to ease the move to Linux
watch allows you to repeat a command, in place, every n-seconds.
For example, if you're waiting to see when a web server log file is hit, you can run this watch command, using ls. By default it will run every 2 seconds.
$ watch ls logs -lart
The screen will not roll up on you.
The information is just updated in place. (very handy).
Every 2.0s: ls logs -lart

total 12
drwxr-xr-x 6 root root 4096 May 23 20:45 ..
-rw-r--r-- 1 root root    0 May 24 14:06 acc_rights.log
drwxr-xr-x 2 root root 4096 May 24 14:06 .
-rw-r--r-- 1 root root  320 May 24 14:14 app_errors.log
You can have the changes highlighted (-d),
alter the seconds to wait between updates (-n)
and more.
Press ctrl-c to end the watch.
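Putting those flags together, a hedged example (the logs directory is a placeholder for your own path; watch needs an interactive terminal to repaint, so the suggested command is shown commented out here):

```shell
# Poll every 5 seconds (-n 5) and highlight what changed between runs (-d).
# 'logs' is a placeholder path; substitute your own log directory:
#
#   watch -d -n 5 'ls -lart logs'
#
# watch ships in the procps package; verify it is available:
command -v watch >/dev/null 2>&1 && echo "watch available" || echo "install procps"
```

Another flag worth knowing is -g, which makes watch exit as soon as the command's output changes, useful in scripts that wait for an event.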
or, when Free = Bad.
Gmail tracks the history of things you buy, and it's hard to delete https://www.cnbc.com/2019/05/17/google-gmail-tracks-purchase-history-how-to-delete-it.html
Google collects the purchases you've made, including from other stores and sites such as Amazon, and saves them on a page called Purchases.
HN Discussion: https://news.ycombinator.com/item?id=19942219
Gmail confidential mode is not secure or private
https://protonmail.com/blog/gmail-confidential-mode-security-privacy/
HN Discussion: https://news.ycombinator.com/item?id=20242637
Organizations/companies switching to open source
S. Korean government to switch to Linux: ministry
By Kim Arin, May 17, 2019
The government will switch the operating system of its computers from Windows to Linux, the Ministry of the Interior and Safety said Thursday.
http://www.koreaherald.com/view.php?ud=20190517000378
The Interior Ministry said the ministry will be test-running Linux on its PCs, and if no security issues arise, Linux systems will be introduced more widely within the government.
WE VALUE PRIVACY
Both yours and ours
The information that we collect:
If you sign-up online, we then store the information that you provide to sign-up with. If you provide an email address to receive newsletters, we then retain your email address to send the newsletters to you.
We store standard browser metrics to understand if our pages have problems and also to optimize for the types of systems (mobile vs desktop; Linux | Windows | Mac | mobile OS | etc.) using our site. This includes IP and referrer data, which are useful to watch for hacking attempts.
We send|sell|give|share this information with NO OTHER company or organization. If we are forced to by court orders, then we must do so.
We do not use tools from other companies (who provide them mostly to track users everywhere).
- That means, you will find no Google analytics here (or Google-anything),
- no external site usage/tracking software,
- no Facebook hooks, that Facebook then gleans your info from,
- no other feeds of your data to outside spots.
We do not use Paypal and we will never email you to ask for credit card or related information.
If you choose to do so, we store two cookies, which contain your preferences for browsing the blog. If you do not set these preferences, no cookie is ever stored. If you have a login, these preferences are stored in a database rather than a cookie.
These cookies are only used to allow guest users to view the blog the way they wish to; and can be deleted at any time.
How information we store is protected
We use SSL/TLS connections to encrypt data between you and our servers.
We store user information in encrypted databases.
Privacy policy updates.
We do not anticipate many changes to the above text, as this has been our policy for 20+ years; but if changes are needed they will be made to this page.
We really try not to send out bulk email that is not 100% necessary to all users; but if you would like us to email you about any privacy policy changes, we would be pleased to do so. Contact us here for that.
Also -- if you have questions or constructive suggestions about our privacy policy, we welcome you sending them via our contact page.