commit 89edcc5866b31c74951b0ea83583be70ce396cee Author: RinRi Date: Sat May 27 14:00:46 2023 +0300 initial commit diff --git a/week1/lab1-solution.html b/week1/lab1-solution.html new file mode 100644 index 0000000..73d6eb8 --- /dev/null +++ b/week1/lab1-solution.html @@ -0,0 +1,568 @@ + + + + + + + +Lab 1 Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab 1 Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. About this document

+
+

+I use Org mode in Emacs to write documents and will use it throughout the course. +It has a super useful feature: it can evaluate code blocks on the fly and save the results in the buffer. +It's also easy to use LaTeX inside Org mode. That's why I will use Org mode for this course. +If there are any problems with that, please report them in the comments on Moodle. +

+
+
+ +
+

2. Exercise 1

+
+

+Code: +

+
+
lsb_release -a
+
+
+ +
+-e LSB Version:	n/a
+-e Distributor ID:	Arch
+-e Description:	Arch Linux
+-e Release:	rolling
+-e Codename:	n/a
+
+ + +

+Code: +

+
+
whoami
+
+
+ +
+rinri
+
+ + +

+Code: +

+
+
users
+
+
+ +
+rinri
+
+ + +

+Code: +

+
+
pwd
+
+
+ +
+/home/rinri/edu/sna
+
+ + +

+Code: +

+
+
ls -la
+
+
+ +
+total 224
+drwxr-xr-x  2 rinri users   4096 Feb  2 15:47 .
+drwxr-xr-x 25 rinri users   4096 Feb  2 14:13 ..
+-rw-r--r--  1 rinri users  20424 Feb  2 16:32 lab1.html
+-rw-r--r--  1 rinri users   6592 Feb  2 16:32 lab1.org
+-rw-r--r--  1 rinri users 190030 Feb  2 15:45 lab1.pdf
+
+ + +

+Code: +

+
+
cd ~/library
+ls -la
+
+
+ +
+total 17260
+drwxr-xr-x   2 rinri users    4096 Jan  8 10:47 .
+drwx--x---+ 74 rinri users    4096 Feb  2 16:31 ..
+-rw-r--r--   1 rinri users 6556637 Jan  8 10:47 Andrew S. Tanenbaum - Modern Operating Systems.pdf
+lrwxrwxrwx   1 rinri users      38 Aug  3  2022 cormen-algos.pdf -> /home/rinri/data/docs/cormen-algos.pdf
+lrwxrwxrwx   1 rinri users      93 Aug  3  2022 genki -> /home/rinri/data/docs/Banno E., Ikeda Y., Ohno Y., Shinagawa Ch., Tokashiki K. - Genki - 2020
+-rw-r--r--   1 rinri users  213363 Jan  8 10:47 ipfs-p2p-file-system.pdf
+-rwxr-xr-x   1 rinri users      66 Aug  3  2022 library.sh
+-rw-r--r--   1 rinri users 2658531 Jan  8 10:47 Stroustrup B. - A Tour of C++ - Second Edition - 2018.pdf
+-rw-r--r--   1 rinri users 8220353 Jan  8 10:47 TRENCH_FREE_DIFFEQ_I.PDF
+
+ +

+Code: +

+
+
cat /etc/shells
+
+
+ +
+# Pathnames of valid login shells.
+# See shells(5) for details.
+
+/bin/sh
+/bin/bash
+/bin/zsh
+/usr/bin/zsh
+/usr/bin/git-shell
+/bin/dash
+
+ + +

+Code: +

+
+
echo "$SHELL"
+
+
+ +
+/bin/zsh
+
+
+
+ +
+

3. Questions

+
+
+
+

3.1. Using hostname command:

+
+

+Code: +

+
+
hostname
+
+
+ +
+akemi
+
+
+
+ +
+

3.2. Arch Linux.

+
+

+It's a rolling-release distribution, which is why there is no fixed "version". I've been using it for several years. I used the lsb_release -a command to check this information. +

+
+
+ +
+

3.3. The root directory is “/”

+
+ +
+

3.4. /bin/bash vs /bin/sh

+
+

+/bin/bash is the path to the bash shell, whereas /bin/sh, on most systems, is a symbolic link to a POSIX-compliant shell. In many cases it is linked to bash; on my machine it is dash (which is usually faster, provided the script is POSIX-compliant). +

+ +

+Code: +

+
+
ls -l /bin/sh
+
+
+ +
+lrwxrwxrwx 1 root root 4 Jul  3  2022 /bin/sh -> dash
+
+
+
+ +
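A quick way to see the practical difference (a sketch; it assumes both bash and dash are installed): the bash-only [[ ]] test works in bash but fails in a strictly POSIX shell such as dash.

echo '[[ -n hello ]] && echo ok' | bash    # prints "ok"
echo '[[ -n hello ]] && echo ok' | dash    # fails with "[[: not found"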
+

3.5. Bash manual

+
+
    +
  1. --verbose - In verbose mode, bash prints extra information instead of hiding it: each line of a script as it is read, commands from bashrc, and other details.
  2. +
  3. --help - prints a usage/help message and exits.
  4. +
  5. --rcfile file - use "file" as the initialization file instead of ~/.bashrc (a short usage sketch follows this list).
  6. +
+
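A small usage sketch for these options; the script and rc-file names are made up for illustration.

bash --verbose myscript.sh        # echo each input line of the script as it is read
bash --rcfile ./custom_bashrc -i  # start an interactive shell that reads ./custom_bashrc instead of ~/.bashrc
bash --help                       # print the option summary and exit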
+
+
+

3.6. Linux distributions I want to try

+
+
+
+

3.6.1. NixOS

+
+

+NixOS uses a unique package manager called nix that solves many problems of common package managers (e.g. apt), including dependency hell. +Apart from the package manager, NixOS has a single configuration file for the entire system. +Moreover, NixOS saves different "states" (generations) of the OS, and a user can roll back, for example, to yesterday's state of the OS if something breaks. +

+
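For illustration, the rollback workflow looks roughly like this with the standard NixOS tooling (exact output and generation numbers vary per machine):

sudo nix-env --list-generations --profile /nix/var/nix/profiles/system   # list the saved system "states" (generations)
sudo nixos-rebuild switch --rollback                                      # switch back to the previous generation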
+
+
+

3.6.2. Gentoo Linux

+
+

+Gentoo Linux also uses a unique package manager, called Portage. +To install software on Gentoo, Portage builds most packages from source and lets the user optimize the software for their own needs. +Gentoo also allows using OpenRC as the init system instead of systemd. Even though OpenRC lacks many of systemd's features, it is significantly lighter and simpler. +

+
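A sketch of how that per-package tuning looks in Portage; the package name and USE flags here are only illustrative.

echo 'app-editors/vim -X minimal' | sudo tee -a /etc/portage/package.use/vim   # pick USE flags for one package
sudo emerge --ask app-editors/vim                                              # build and install it from source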
+
+
+

3.6.3. Artix Linux

+
+

+Artix Linux is essentially Arch Linux, but it gives several options for the init system, including OpenRC, runit, s6, and others. +

+
+
+
+

3.6.4. LFS

+
+

+LFS (Linux From Scratch) walks a user through building their own Linux distribution from source. I think it teaches a lot about how Linux is put together. +

+
+
+
+

3.6.5. Alpine Linux

+
+

+Alpine Linux is a lightweight Linux distribution: it uses musl libc instead of glibc and busybox instead of GNU coreutils. It is widely used as a base for Docker images, so it is useful to learn. +

+
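Alpine's package manager is apk; a minimal illustration (run inside an Alpine container, network access required):

docker run --rm alpine sh -c 'apk add --no-cache curl && curl --version'   # install a package with apk and use it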
+
+
+
+

3.7. POSIX

+
+

+POSIX is a family of standards created to maintain compatibility between operating systems. +For example, the POSIX-compliant shell I mentioned earlier is a shell that behaves as specified in the POSIX standard for shells. +If a script is POSIX-compliant, any POSIX-compliant shell can run it without issues; such scripts usually start with #!/bin/sh. +Some of this information is taken from Wikipedia. +

+
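A minimal POSIX-compliant script as a sketch: it avoids bashisms such as [[ ]] and arrays, so any POSIX shell (dash, bash, busybox sh) can run it.

#!/bin/sh
# Count regular files in the current directory using only POSIX features
count=0
for f in ./*; do
    [ -f "$f" ] && count=$((count + 1))
done
printf 'Regular files: %d\n' "$count"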
+
+
+

3.8. Advantages of POSIX standards

+
+

+If a program is written with POSIX in mind, then it should work on other POSIX OSes too. Thus the portability increases. +Since POSIX standards are public, everyone can create programs that can work and communicate with other POSIX-compliant programs. +

+
+
+
+

3.9. Slackware vs Debian

+
+

+Both distributions are old (29 years old), but both are still maintained. +

+ +

+Slackware tries to be stable and simple, and thus makes as few changes to upstream software as possible. It uses the pkgtool package management system. +There are not that many packages available in Slackware's official repositories, but users can use third-party repositories to install software or update the system. +Slackware has a small team of developers, whereas Debian is a popular distribution with many maintainers. +

+ +

+Debian stable is widely used on servers, thanks to its stability and long-term support, while Debian unstable (rolling-release) and testing are often used on desktops. +Debian uses the apt (with dpkg) package management system. Many Linux distributions are based on Debian, e.g. Ubuntu, MX Linux, etc. +

+
+
+
+

3.10. uname -a

+
+

+Code: +

+
+
uname -a
+
+
+ +
+Linux akemi 6.1.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 24 Jan 2023 21:07:04 +0000 x86_64 GNU/Linux
+
+ + +
    +
  1. Kernel name: +Linux
  2. +
  3. Hostname: +akemi
  4. +
  5. Kernel release version and kernel version: +6.1.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 24 Jan 2023 21:07:04 +0000
  6. +
  7. Hardware platform name: +x86_64
  8. +
  9. Operating system name: +GNU/Linux
  10. +
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-02-02 Thu 16:34

+
+ + \ No newline at end of file diff --git a/week1/lab1-solution.org b/week1/lab1-solution.org new file mode 100644 index 0000000..6846603 --- /dev/null +++ b/week1/lab1-solution.org @@ -0,0 +1,187 @@ +#+title: Lab 1 +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* About this document +I use Org mode in Emacs to write documents and will use it throughout the course. +It has a super useful feature: it can evaluate the code on the fly and save results in the buffer. +Also it's easy to use latex inside Org mode. That's why I will use Org mode for this course. +If there are any problems with that, please report in the comments in Moodle. + +* Exercise 1 +Code: +#+begin_src bash +lsb_release -a +#+end_src + +#+RESULTS: +: -e LSB Version: n/a +: -e Distributor ID: Arch +: -e Description: Arch Linux +: -e Release: rolling +: -e Codename: n/a + +Code: +#+begin_src bash +whoami +#+end_src + +#+RESULTS: +: rinri + +Code: +#+begin_src bash +users +#+end_src + +#+RESULTS: +: rinri + +Code: +#+begin_src bash +pwd +#+end_src + +#+RESULTS: +: /home/rinri/edu/sna + +Code: +#+begin_src bash +ls -la +#+end_src + +#+RESULTS: +: total 224 +: drwxr-xr-x 2 rinri users 4096 Feb 2 15:47 . +: drwxr-xr-x 25 rinri users 4096 Feb 2 14:13 .. +: -rw-r--r-- 1 rinri users 19950 Feb 2 15:50 lab1.html +: -rw-r--r-- 1 rinri users 6407 Feb 2 15:51 lab1.org +: -rw-r--r-- 1 rinri users 190030 Feb 2 15:45 lab1.pdf + +Code: +#+begin_src bash +cd ~/library +ls -la +#+end_src + +#+RESULTS: +#+begin_example +total 17260 +drwxr-xr-x 2 rinri users 4096 Jan 8 10:47 . +drwx--x---+ 74 rinri users 4096 Feb 2 16:07 .. +-rw-r--r-- 1 rinri users 6556637 Jan 8 10:47 Andrew S. Tanenbaum - Modern Operating Systems.pdf +lrwxrwxrwx 1 rinri users 38 Aug 3 2022 cormen-algos.pdf -> /home/rinri/data/docs/cormen-algos.pdf +lrwxrwxrwx 1 rinri users 93 Aug 3 2022 genki -> /home/rinri/data/docs/Banno E., Ikeda Y., Ohno Y., Shinagawa Ch., Tokashiki K. - Genki - 2020 +-rw-r--r-- 1 rinri users 213363 Jan 8 10:47 ipfs-p2p-file-system.pdf +-rwxr-xr-x 1 rinri users 66 Aug 3 2022 library.sh +-rw-r--r-- 1 rinri users 2658531 Jan 8 10:47 Stroustrup B. - A Tour of C++ - Second Edition - 2018.pdf +-rw-r--r-- 1 rinri users 8220353 Jan 8 10:47 TRENCH_FREE_DIFFEQ_I.PDF +#+end_example + +Code: +#+begin_src bash +cat /etc/shells +#+end_src + +#+RESULTS: +: # Pathnames of valid login shells. +: # See shells(5) for details. +: +: /bin/sh +: /bin/bash +: /bin/zsh +: /usr/bin/zsh +: /usr/bin/git-shell +: /bin/dash + +Code: +#+begin_src bash +echo "$SHELL" +#+end_src + +#+RESULTS: +: /bin/zsh + +* Questions +** Using hostname command: +Code: +#+begin_src bash +hostname +#+end_src + +#+RESULTS: +: akemi + +** Arch Linux. +It's a rolling-release distribution. That's why there is no "version". I've been using it for several years. Used lsb_release -a command to check the info. + +** The root directory is "/" + +** /bin/bash vs /bin/sh +/bin/bash is a path to the bash shell. Whereas /bin/sh, on most of the systems, is a symbolic link to a POSIX-compliant shell. In many cases, it's linked to bash. On my machine, it's dash (it's usually faster if the script is POSIX-compliant) + +Code: +#+begin_src bash +ls -l /bin/sh +#+end_src + +#+RESULTS: +: lrwxrwxrwx 1 root root 4 Jul 3 2022 /bin/sh -> dash + +** Bash manual +a. 
--verbose - When verbose mode is used, bash doesn't hide extra information (prints it), including all the steps done of a script, bashrc commands, and other info. +b. --help - shows help message +c. --rcfile file - use "file" as a initialization file instead of ~/.bashrc +** Linux distributions I want to try +*** NixOS +NixOS uses a unique package manager called nix that solves many problems of common package managers (e.g. apt), including dependency hell. +Apart from the package manager, NixOS has single configuration file for the entire system. +Moreover, NixOS saves different "states" of the OS, and a user can rollback, for example, to the yesterday's state of the OS if something breaks. +*** Gentoo Linux +Gentoo Linux also uses a unique package manager called Portage. +To install software on Gentoo, Portage builds most of the packages from source and allows user to optimize the software for their own needs. +Gentoo also allows to use OpenRC as an init system instead of systemd. Even though OpenRC doesn't have many features of systemd, it's significantly lighter and simpler than systemd. +*** Artix Linux +Artix Linux is essentially Arch Linux, but it gives several options for the init system, including OpenRC, runit, s6, and others. +*** LFS +LFS allows a user to build their own Linux distribution. I think it allows a user to learn many things about Linux. +*** Alpine Linux +Alpine Linux is a lightweight Linux distribution, since it uses musl libc instead of glibc and busybox instead of GNU coreutils. It's widely used in Docker Images, thus it's useful to learn Alpine Linux. +** POSIX +POSIX is a family of standards created to maintain compatibility between operating systems. +For example, POSIX-compliant shell I mentioned earlier, is a shell that does things as mentioned in the POSIX standard for shells. +If a script is POSIX-compliant, any POSIX-compliant shell can run it without issues and the script usually starts with #!/bin/sh +Some of the information is taken from Wikipedia. +** Advantages of POSIX standards +If a program is written with POSIX in mind, then it should work on other POSIX OSes too. Thus the portability increases. +Since POSIX standards are public, everyone can create programs that can work and communicate with other POSIX-compliant programs. +** Slackware vs Debian +Both distributions are old (29 years old), but both are still maintained. + +Slackware tries to be stable and simple, thus makes as few changes to the software as possible. It uses pkgtool package management system. +There are not that many packages available in Slackware, but users can use third-party repositories to install software or update the system. +Slackware has a small team of developers, whereas Debian is an popular distribution, that has many maintainers. + +Debian stable is widely used on servers, thanks to its stability and long-term support, while Debian unstable (rolling-release) and testing are used on PCs. +Debian uses apt (with dpkg) package management system. There are many Linux distributions based on Debian, e.g. Ubuntu, MX linux, etc. +** uname -a +Code: +#+begin_src bash +uname -a +#+end_src + +#+RESULTS: +: Linux akemi 6.1.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 24 Jan 2023 21:07:04 +0000 x86_64 GNU/Linux + +1. Kernel name: + Linux +2. Hostname: + akemi +3. Kernel release version and kernel version: + 6.1.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 24 Jan 2023 21:07:04 +0000 +4. Hardware platform name: + x86_64 +5. 
Operating system name: + GNU/Linux diff --git a/week1/lab1.html b/week1/lab1.html new file mode 100644 index 0000000..e3c61e4 --- /dev/null +++ b/week1/lab1.html @@ -0,0 +1,282 @@ + + + + + + + + + + + + + Lab 1: Introduction to Linux - HackMD + + + + + + + + + + + + + + + + + +

Lab 1: Introduction to Linux


Environment Preparation

    +
  1. Download and Install Ubuntu 22.04 LTS on your workstation as a virtual machine. It is recommended to use Virtualbox but you can use alternate solutions of your choice. You can use your host operating system if you have Ubuntu installed.
  2. +
  3. Keep the instance of virtual machine for future labs
  4. +

Grading requirements for all labs


Exercise 1 - Finding your way around Linux

+

A shell is a program that provides an interface for the user to interact with the operating system. It gathers input (commands) from the user, executes them, and returns the output when necessary. The terminal where you type your commands is a shell.

+

Questions to answer

    +
  1. What is your machine hostname? How did you check it?
  2. +
  3. What distribution of Linux did you install, and what is the version?
  4. +
  5. What is the root directory on your machine?
  6. +
  7. What is the difference between /bin/bash and /bin/sh?
  8. +
  9. Read the manual for bash. List three options and describe what they do. +
    +

    Hint: RTFM

    +
    +
  10. +
  11. Write five (5) Linux distributions you want to try. Write short notes on their purposes.
  12. +
  13. What is the POSIX standard?
  14. +
  15. What are the advantages of the POSIX standard?
  16. +
  17. Write the differences between Slackware and Debian.
  18. +
  19. Explain all the details of the output from the command uname -a.
  20. +
+ + + + + + + + + diff --git a/week10/lab10-solution.html b/week10/lab10-solution.html new file mode 100644 index 0000000..84492a6 --- /dev/null +++ b/week10/lab10-solution.html @@ -0,0 +1,334 @@ + + + + + + + +Lab9 Solution Amirlan Sharipov (BS21-CS-01) + + + + + + + + +
+

Lab9 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+

+I would use rsyslog and journald, forward their logs to the ELK stack, and use it as a SIEM. +Many people are familiar with the ELK stack, and it is easy to scale. +

+
+
+ +
+

2. Question 2

+
+

+> sudo cat /etc/rsyslog.d/auth-errors.conf +auth.alert,authpriv.alert /var/log/auth-errors +

+ +

+rsyslogpriority + +

+
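To test the rule, something along these lines should work (the message text is arbitrary):

logger -p auth.alert "test auth alert message"   # send a test message with facility auth, priority alert
sudo tail -n 5 /var/log/auth-errors              # verify from the rsyslog side
journalctl -p alert -n 5                         # verify from the journald side (alert and higher severity)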
+
+ +
+

3. Question 3

+
+
+
cat /etc/logrotate.d/httpd
+
+
+ +
+/var/log/httpd/*log {
+   rotate 10
+   compress
+   missingok
+   sharedscripts
+   postrotate
+      /usr/bin/systemctl reload httpd.service 2>/dev/null || true
+   endscript
+}
+
+ + +

+0 */6 * * * logrotate /etc/logrotate.d/httpd +

+ +

+logrotate + +

+
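A manual test of the configuration could look like this (paths as in the config above):

sudo logrotate -d /etc/logrotate.d/httpd   # dry run: show what would be rotated without doing it
sudo logrotate -f /etc/logrotate.d/httpd   # force a rotation now
ls /var/log/httpd/                         # the rotated, compressed logs should appear here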
+
+ +
+

4. Question 4

+
+ + +
+

5. Question 5

+
+

+in /etc/bashrc: +

+ +

+export PROMPT_COMMAND='RETRN_VAL=$?;logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//" )"' +

+ +

+in /etc/rsyslog.d/bash.conf +local6.* /var/log/commands.log +

+ +

+Taken from https://unix.stackexchange.com/questions/664581/how-do-i-log-all-commands-executed-by-all-users +

+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-04-14 Fri 00:00

+
+ + \ No newline at end of file diff --git a/week10/lab10-solution.org b/week10/lab10-solution.org new file mode 100644 index 0000000..c7a5176 --- /dev/null +++ b/week10/lab10-solution.org @@ -0,0 +1,50 @@ +#+title: Lab9 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +I would use rsyslog and journald. Forward them to ELK stack and use it as a SIEM. +There are many people people familiar with the ELK stack, and it's easy to scale it. + +* Question 2 +> sudo cat /etc/rsyslog.d/auth-errors.conf +auth.alert,authpriv.alert /var/log/auth-errors + +[[./rsyslogpriority.jpg][rsyslogpriority +]] + +* Question 3 +#+begin_src bash +cat /etc/logrotate.d/httpd +#+end_src + +#+RESULTS: +: /var/log/httpd/*log { +: rotate 10 +: compress +: missingok +: sharedscripts +: postrotate +: /usr/bin/systemctl reload httpd.service 2>/dev/null || true +: endscript +: } + +'* */6 * * * logrotate /etc/logrotate.d/httpd + +[[./logrotate.jpg][logrotate +]] + +* Question 4 + + +* Question 5 +in /etc/bashrc: + +export PROMPT_COMMAND='RETRN_VAL=$?;logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//" )"' + +in /etc/rsyslog.d/bash.conf +local6.* /var/log/commands.log + +Taken from https://unix.stackexchange.com/questions/664581/how-do-i-log-all-commands-executed-by-all-users diff --git a/week10/lab10.html b/week10/lab10.html new file mode 100644 index 0000000..1bcb559 --- /dev/null +++ b/week10/lab10.html @@ -0,0 +1,472 @@ + + + + + + + + + + + + + Lab 10: Logging and auditing - HackMD + + + + + + + + + + + + + + + + + +

Lab 10: Logging and auditing

Task 1: journald

Task 2: rsyslog

Rsyslog rules typically specify a facility and a level. Combined in the form facility.level, these two define the priority of a log message.

+

More about facility and level: https://success.trendmicro.com/dcx/s/solution/TP000086250

+

Logging rsyslog to a remote server

Task 3: Logrotate

We have a log file /var/log/lab10.log that we need to rotate periodically.

+

You can test and troubleshoot your logrotate configuration by running $ logrotate -s logstatus /etc/lab10-rotate.d/lab10log.
+If no message is displayed on the terminal after running the command, it means that the configuration is good.
+You can look in the target log file directory to verify that the log has been rotated.

+

Task 4: User authentication activities

Task 5: User sessions and sudo usage

Questions to answer

    +
  1. What security monitoring tool will you forward your system logs to for security event detection? Give reasons for your choice.
  2. +
  3. Configure rsyslogd by adding a rule to the newly created configuration file /etc/rsyslog.d/auth-errors.conf to log all security and authentication messages with the priority alert and higher to the /var/log/auth-errors file. Test the newly added log directive with the logger command. Verify it from rsyslog and journald perspectives by filtering the output.
  4. +
  5. Install Apache web server and configure log rotate to rotate its web access log every six hours. Compress the rotated log files, and ensure that log rotate restarts the web server after rotating the logs. Manually execute the logrotate utility to test your configuration and show results.
  6. +
  7. Create a bash script that continuously monitors the /var/log/auth.log file and triggers an alarm if there are three or more “authentication failure” in 30 seconds. The text Three or more authentication failure in 30 seconds should be appended to a log file /var/log/alarm.log everytime the alarm is triggered. Show test use case and results.
  8. +
  9. How can you log all commands executed by every user on Linux systems. What utility will you use for this. Show how you configure this tool, and show the logs generated.
  10. +

Bonus

    +
  1. Set up a centralized journald logging server systemd-journal-remote. Configure another machine as a client to forward its journal to the logging server. +
      +
    • Test your setup by running the logger utility on the client system and show the logs generated on the logging server.
    • +
    +
  2. +
+ + + + + + + + + diff --git a/week10/logrotate.jpg b/week10/logrotate.jpg new file mode 100644 index 0000000..e0de5c0 Binary files /dev/null and b/week10/logrotate.jpg differ diff --git a/week10/rsyslogpriority.jpg b/week10/rsyslogpriority.jpg new file mode 100644 index 0000000..4611f74 Binary files /dev/null and b/week10/rsyslogpriority.jpg differ diff --git a/week11/container-cp.jpg b/week11/container-cp.jpg new file mode 100644 index 0000000..2bdd3d4 Binary files /dev/null and b/week11/container-cp.jpg differ diff --git a/week11/container-ls-1.jpg b/week11/container-ls-1.jpg new file mode 100644 index 0000000..01f33c5 Binary files /dev/null and b/week11/container-ls-1.jpg differ diff --git a/week11/container-ls-2.jpg b/week11/container-ls-2.jpg new file mode 100644 index 0000000..b48b1c2 Binary files /dev/null and b/week11/container-ls-2.jpg differ diff --git a/week11/index.html b/week11/index.html new file mode 100644 index 0000000..e427984 --- /dev/null +++ b/week11/index.html @@ -0,0 +1 @@ +HELLO diff --git a/week11/lab11-solution.html b/week11/lab11-solution.html new file mode 100644 index 0000000..4c617a5 --- /dev/null +++ b/week11/lab11-solution.html @@ -0,0 +1,407 @@ + + + + + + + +Lab 11 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab 11 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+

+Source: https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile +Usually the effective entrypoint is /bin/sh -c, and CMD supplies the command that gets executed when the container is run. +It is standard practice to customize CMD; customizing the entrypoint is useful when you want the container to always run a fixed executable (for example, a different shell) and treat CMD only as its default arguments. +

+
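A quick illustration of the difference from the docker CLI, using the stock alpine image (whose ENTRYPOINT is empty and whose CMD is /bin/sh):

docker run --rm alpine echo hello          # arguments after the image name replace CMD
docker run --rm --entrypoint date alpine   # ENTRYPOINT can only be overridden at run time with --entrypoint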
+
+ +
+

2. Question 2

+ +
+

2.1. Choose a host OS that provides maximum container isolation. (hardened host OS)

+
+
+

2.2. Use network namespaces

+
+
+

2.3. Use kubernetes to manage access right

+
+
+

2.4. Monitor the logs using SIEM tools

+
+
+

2.5. Don’t use outdated images

+
+
+ +
+

3. Question 3

+
+

+container-ls-1.jpg +container-ls-2.jpg +

+
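For reference, common one-liners for this task (the screenshots above show the command actually used; these are general equivalents):

docker container prune -f                    # remove all stopped containers
docker rm $(docker ps -q -f status=exited)   # or: remove only containers in the "exited" state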
+
+
+

4. Question 4

+
+

+Source: https://docs.docker.com/engine/reference/commandline/cp/ +docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|- +

+ +

+Example: +

+
+
cat ~/nginx.sh
+
+
+ +
+#!/bin/bash
+
+docker run \
+    -v /etc/ssl/certs/monica.crt:/etc/ssl/certs/monica.crt \
+    -v /etc/ssl/private/monica.key:/etc/ssl/private/monica.key \
+    -v /home/rinri/.config/nginx:/etc/nginx/conf.d \
+    -v /home/rinri/edu/sna/:/var/www \
+    -p 80:80 -p 443:443 -p 5000:5000 \
+    --restart unless-stopped \
+    -d nginx
+
+
+ +

+After running nginx.sh: +container-cp.jpg +

+
+
+ +
+

5. Question 5

+
+
+
echo "Run Nginx container:"
+cat ~/nginx.sh
+echo "Config file:"
+cat ~/.config/nginx/test.conf
+
+
+ +
+Run Nginx container:
+#!/bin/bash
+
+docker run \
+    -v /etc/ssl/certs/monica.crt:/etc/ssl/certs/monica.crt \
+    -v /etc/ssl/private/monica.key:/etc/ssl/private/monica.key \
+    -v /home/rinri/.config/nginx:/etc/nginx/conf.d \
+    -v /home/rinri/edu/sna/:/var/www \
+    -p 80:80 -p 443:443 -p 5000:5000 \
+    --restart unless-stopped \
+    -d nginx
+
+Config file:
+server {
+    listen 5000;
+    listen [::]:5000;
+    root /var/www;
+    index index.html index.htm;
+
+    location / {
+        try_files $uri $uri/ =404;
+    }
+}
+
+server {
+    listen 80;
+    listen [::]:80;
+
+    server_name monica.local;
+
+    return 302 https://$server_name$request_uri;
+}
+
+server {
+    listen 443;
+    listen [::]:443;
+
+    include conf.d/snippets/self-signed.conf;
+
+    server_name monica.local;
+
+    location / {
+        proxy_pass http://172.17.0.4;
+        proxy_set_header Host monica.local;
+    }
+}
+
+
+
+ +
+

6. Question 6

+
+

+In /etc/rsyslog.conf: +$ModLoad imtcp.so +$InputTCPServerRun 514 +

+ +

+Command: +docker run -it --log-driver syslog --log-opt syslog-address=tcp://172.17.0.1:514 alpine ash +

+
+
+ +
+

7. Question 8

+
+

+FROM alpine +RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python +RUN python3 -m ensurepip +RUN pip3 install --no-cache --upgrade pip setuptools +RUN touch index.html +RUN echo "<html><h1>Testing web</h1></html>" >> index.html +CMD ["python", "-m", "http.server"] +

+ +

+Changed apt-get to apk, since Alpine does not ship apt. +Source: https://stackoverflow.com/questions/62554991/how-do-i-install-python-on-alpine-linux +

+
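To verify the fixed Dockerfile, a build-and-run check along these lines should work (image name and host port are arbitrary):

docker build -t fixed-web .             # build from the corrected Dockerfile
docker run --rm -p 8000:8000 fixed-web  # python -m http.server listens on port 8000 by default
curl http://localhost:8000/index.html   # should return the test page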
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-04-20 Thu 22:23

+
+ + \ No newline at end of file diff --git a/week11/lab11-solution.org b/week11/lab11-solution.org new file mode 100644 index 0000000..58e71ef --- /dev/null +++ b/week11/lab11-solution.org @@ -0,0 +1,125 @@ +#+title: Lab 11 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +Source: https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile +Usually, the entrypoint is /bin/sh -c CMD. So this command gets executed when the container is run. +It's a standard practice to customize CMD, though. If you want to use other shell for executing commands, it may be useful to customize the entrypoint. + +* Question 2 +Source: https://www.redhat.com/en/topics/security/container-security +** Choose a host OS that provides maximum container isolation. (hardened host OS) +** Use network namespaces +** Use kubernetes to manage access right +** Monitor the logs using SIEM tools +** Don't use outdated images + +* Question 3 +[[./container-ls-1.jpg]] +[[./container-ls-2.jpg]] +* Question 4 +Source: https://docs.docker.com/engine/reference/commandline/cp/ +docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|- + +Example: +#+begin_src bash +cat ~/nginx.sh +#+end_src + +#+RESULTS: +#+begin_example +#!/bin/bash + +docker run \ + -v /etc/ssl/certs/monica.crt:/etc/ssl/certs/monica.crt \ + -v /etc/ssl/private/monica.key:/etc/ssl/private/monica.key \ + -v /home/rinri/.config/nginx:/etc/nginx/conf.d \ + -p 80:80 -p 443:443 \ + --restart unless-stopped \ + -d nginx + +#+end_example + +After running nginx.sh: +[[./container-cp.jpg]] + +* Question 5 +#+begin_src bash +echo "Run Nginx container:" +cat ~/nginx.sh +echo "Config file:" +cat ~/.config/nginx/test.conf +#+end_src + +#+RESULTS: +#+begin_example +Run Nginx container: +#!/bin/bash + +docker run \ + -v /etc/ssl/certs/monica.crt:/etc/ssl/certs/monica.crt \ + -v /etc/ssl/private/monica.key:/etc/ssl/private/monica.key \ + -v /home/rinri/.config/nginx:/etc/nginx/conf.d \ + -v /home/rinri/edu/sna/:/var/www \ + -p 80:80 -p 443:443 -p 5000:5000 \ + --restart unless-stopped \ + -d nginx + +Config file: +server { + listen 5000; + listen [::]:5000; + root /var/www; + index index.html index.htm; + + location / { + try_files $uri $uri/ =404; + } +} + +server { + listen 80; + listen [::]:80; + + server_name monica.local; + + return 302 https://$server_name$request_uri; +} + +server { + listen 443; + listen [::]:443; + + include conf.d/snippets/self-signed.conf; + + server_name monica.local; + + location / { + proxy_pass http://172.17.0.4; + proxy_set_header Host monica.local; + } +} +#+end_example + +* Question 6 +In /etc/rsyslog.conf: +$ModLoad imtcp.so +$InputTCPServerRun 514 + +Command: +docker run -it --log-driver syslog --log-opt syslog-address=tcp://172.17.0.1:514 alpine ash + +* Question 8 +FROM alpine +RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python +RUN python3 -m ensurepip +RUN pip3 install --no-cache --upgrade pip setuptools +RUN touch index.html +RUN echo "

Testing web

" >> index.html +CMD ["python", "-m", "http.server"] + +changed apt to apk. +source: https://stackoverflow.com/questions/62554991/how-do-i-install-python-on-alpine-linux diff --git a/week11/lab11.html b/week11/lab11.html new file mode 100644 index 0000000..c5a42ec --- /dev/null +++ b/week11/lab11.html @@ -0,0 +1,556 @@ + + + + + + + + + + + + + Lab 11: Docker - HackMD + + + + + + + + + + + + + + + + + +

Lab 11: Docker

Task 1: Install Docker on Ubuntu

It is recommended that you do not run docker as the root user. If you currently cannot run $ docker with your non-root account, then take the following steps:

+

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

+
$ sudo usermod -aG docker <username>
+
+

Task 2: Pull images and run containers

Task 3: Create a custom Docker image

Let’s create a static page website and run it on a Python web server.

+

Note: There are more effective ways to set up a web server. We use the methods in this lab simply to explore the process of creating a Docker image.

+

Task 4: Docker inspect and container logs

Docker inspect is used to view low-level information on Docker objects.

Container logging helps developers keep track of patterns, troubleshoot issues, and fix bugs.

Questions to answer

    +
  1. Compare and contrast ENTRYPOINT and CMD in Dockerfile. In what situation would you use each of them?
  2. +
  3. List five security precautions you will take when building or deploying a Docker resource (image or container).
  4. +
  5. Show a single line command that will remove all exited Docker containers. Do not use any text filtering editor. Show test results.
  6. +
  7. Show how you can copy files to a running container without entering the container’s interactive shell.
  8. +
  9. Create a dockerized web application running on nginx. The web index page index.html should be located on your host machine. The directory containing the index page should be mounted to the container and served from there. +
    +

    This means that you should be able to modify the web index page on your host machine without interacting with the container.
    +Show all steps taken for the configuration including the test results.

    +
    +
  10. +
  11. Setup rsyslog on your host machine as a central logging server. Create a Docker container and configure it to forward its log to your central logging server. +
    +

    Show steps and test results.

    +
    +
  12. +

Bonus

    +
  1. Dockerize any open source application of your choice, and host it on Docker hub. Share link to the repository.
  2. +
  3. Find and fix the problems in the following Dockerfile. There are some issues building the image and also running the container:
    + + + +
    FROM alpine +RUN apt-get update && apt-get install -y python3 --no-install-recommends +RUN touch index.html +RUN echo "<html><h1>Testing web</h1></html>" >> index.html +CMD ["python", "-m", "http.server"] +
    +
    +

    Show all steps taken to fix it, and a working solution.

    +
    +
  4. +
+ + + + + + + + + diff --git a/week2/lab2-solution.html b/week2/lab2-solution.html new file mode 100644 index 0000000..185ca81 --- /dev/null +++ b/week2/lab2-solution.html @@ -0,0 +1,468 @@ + + + + + + + +Lab2 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab2 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Questions 1

+
+
+
+

1.1. What is fdisk utility used for?

+
+

+It is used to view and manipulate disk partition tables. +

+
+
+
+

1.2. Show the bootable device(s) on your machine, and identify which partition(s) are bootable.

+
+
+
+

1.2.1. Output of fdisk -l:

+
+

+… +Device Start End Sectors Size Type +/dev/sdb1 2048 34815 32768 16M Microsoft reserved +/dev/sdb2 34816 524285951 524251136 250G Microsoft basic data +/dev/sdb3 524285952 659988479 135702528 64.7G Linux filesystem +/dev/sdb4 659988480 863920127 203931648 97.2G Linux filesystem +/dev/sdb5 958291968 975173631 16881664 8G Linux swap +/dev/sdb6 975173632 976773134 1599503 781M EFI System +/dev/sdb7 863920128 956194815 92274688 44G Linux filesystem +/dev/sdb8 956194816 958291967 2097152 1G EFI System +

+
+
+
+

1.2.2. Answer

+
+

+/dev/sdb6 and /dev/sdb8 are bootable partitions +

+
+
+
+ +
+

1.3. What is logical block address?

+
+

+Logical block addressing (LBA) is a scheme for indexing the locations of logical blocks on a storage device. Addressing starts at LBA 0. +

+
+
+
+

1.4. Why did we specify the count, the bs, and the skip options when using dd?

+
+

+count sets the number of blocks to copy, bs sets the block size, and skip sets how many input blocks to skip before copying. +

+
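For example, the kind of dd invocation these options are used in (the device name is illustrative; adjust to your disk):

sudo dd if=/dev/sdb of=mbr.bin bs=512 count=1          # copy one 512-byte block: the MBR at LBA 0
sudo dd if=/dev/sdb of=gpt-hdr.bin bs=512 count=1 skip=1   # skip the first block, then copy the GPT header at LBA 1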
+
+
+

1.5. Why does a GPT formatted disk have the MBR?

+
+

+To maintain backward compatibility and to protect the GPT disk from MBR-only disk utilities, which would otherwise see the disk as empty and could overwrite it. +

+
+
+
+

1.6. Name two differences between primary and logical partitions in an MBR partitioning scheme

+
+

+There can be only 4 primary partitions on an MBR disk, while there can be many logical partitions inside an extended partition. Also, some operating systems cannot boot from a logical partition. +

+
+
+
+ + +
+

2. Questions 2

+
+
+
+

2.1. Why is Shim used to load the GRUB bootloader?

+
+

+To make the Secure Boot mechanism work: Shim is signed with a key that the firmware already trusts, and it then verifies and loads GRUB. +

+
+
+
+

2.2. Can you locate your grub configuration file? Show the path.

+
+

+/boot/grub/grub.cfg +Also, there is /etc/default/grub which can be used to generate a grub config using grub-mkconfig +

+
+
+
+

2.3. According to the boot order, what is the third boot device on your computer? How did you check this?

+
+

+BootCurrent: 0003 +Timeout: 0 seconds +BootOrder: 0003,0004,0009,0001,0002,0000,2001,0006,0005,2002,2003 +… +Boot0009* Artix HD(10,GPT,… +It’s Artix on my hard drive, according to efibootmgr -v +

+
+
+
+ +
+

3. Questions 3

+
+
+
+

3.1. How many inodes are in use on your system?

+
+
+
df --output=source,iused
+
+
+ +
+Filesystem      IUsed
+dev               717
+run              1314
+/dev/sdb3      720461
+tmpfs               1
+tmpfs              26
+/dev/sdb4      677813
+/dev/sdb6           0
+tmpfs              71
+/dev/sda9       16193
+/dev/sda8       33171
+
+
+
+ +
+

3.2. What is the filesystem type of the EFI partition?

+
+

+FAT32 +

+
+
+
+

3.3. What device is mounted at your root / directory? Show proof.

+
+
+
lsblk
+
+
+ +
+NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
+sda      8:0    0 931.5G  0 disk 
+├─sda1   8:1    0   499M  0 part 
+├─sda2   8:2    0   100M  0 part 
+├─sda3   8:3    0    16M  0 part 
+├─sda4   8:4    0  42.4G  0 part 
+├─sda5   8:5    0  38.8G  0 part 
+├─sda6   8:6    0   100M  0 part 
+├─sda7   8:7    0   3.1G  0 part 
+├─sda8   8:8    0   315G  0 part /mnt/rec
+└─sda9   8:9    0 531.5G  0 part /mnt/data
+sdb      8:16   0 465.8G  0 disk 
+├─sdb1   8:17   0    16M  0 part 
+├─sdb2   8:18   0   250G  0 part 
+├─sdb3   8:19   0  64.7G  0 part /
+├─sdb4   8:20   0  97.2G  0 part /home
+├─sdb5   8:21   0     8G  0 part [SWAP]
+├─sdb6   8:22   0   781M  0 part /boot
+├─sdb7   8:23   0    44G  0 part 
+└─sdb8   8:24   0     1G  0 part 
+
+

+/dev/sdb3 +

+
+
+ +
+

3.4. What is your partition UUID?

+
+

+For PARTUUID: +

+
+
lsblk -dno PARTUUID /dev/sdb3
+
+
+ +
+fbef9613-fbf5-8445-8d1c-7a63709d1229
+
+
+
+ +
+

3.5. Show at least two methods of viewing the UUID of a block device.

+
+

+lsblk -dno UUID /dev/sdb3 +blkid +

+
+
+ +
+

3.6. What is the function of /dev/zero?

+
+

+It is a pseudo-device that produces an endless stream of zero bytes. It can be used with dd to fill a file or device with zeros. +

+
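For example (the file name is arbitrary):

dd if=/dev/zero of=zeros.img bs=1M count=10   # create a 10 MiB file filled with zero bytes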
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-02-09 Thu 23:36

+
+ + \ No newline at end of file diff --git a/week2/lab2-solution.org b/week2/lab2-solution.org new file mode 100644 index 0000000..abfde93 --- /dev/null +++ b/week2/lab2-solution.org @@ -0,0 +1,108 @@ +#+title: Lab2 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Questions 1 +** What is fdisk utility used for? +to manipulate disk partition table +** Show the bootable device(s) on your machine, and identify which partition(s) are bootable. +*** Output of fdisk -l: +... +Device Start End Sectors Size Type +/dev/sdb1 2048 34815 32768 16M Microsoft reserved +/dev/sdb2 34816 524285951 524251136 250G Microsoft basic data +/dev/sdb3 524285952 659988479 135702528 64.7G Linux filesystem +/dev/sdb4 659988480 863920127 203931648 97.2G Linux filesystem +/dev/sdb5 958291968 975173631 16881664 8G Linux swap +/dev/sdb6 975173632 976773134 1599503 781M EFI System +/dev/sdb7 863920128 956194815 92274688 44G Linux filesystem +/dev/sdb8 956194816 958291967 2097152 1G EFI System +*** Answer +/dev/sdb6 and /dev/sdb8 are bootable partitions + +** What is logical block address? +is a scheme to index the locations of logical blocks of a device. Starts with LBA 0 +** Why did we specify the count, the bs, and the skip options when using dd? +Number of blocks, block size, and how many blocks to skip +** Why does a GPT formatted disk have the MBR? +To maintain compatibility and protect GPT disk and from MBR-based disk utilities. +** Name two differences between primary and logical partitions in an MBR partitioning scheme +There can be only 4 primary partitions in MBR disk, while there can be many logical ones on top of an extended partition. Some operating systems cannot boot from a logical partition. + + +* Questions 2 +** Why is Shim used to load the GRUB bootloader? +To make Secure Boot mechanism work. +** Can you locate your grub configuration file? Show the path. +/boot/grub/grub.cfg +Also, there is /etc/default/grub which can be used to generate a grub config using grub-mkconfig +** According to the boot order, what is the third boot device on your computer? How did you check this? +BootCurrent: 0003 +Timeout: 0 seconds +BootOrder: 0003,0004,0009,0001,0002,0000,2001,0006,0005,2002,2003 +... +Boot0009* Artix HD(10,GPT,... +It's Artix on my hard drive, accoding to efibootmgr -v + +* Questions 3 +** How many inodes are in use on your system? +#+begin_src bash +df --output=source,iused +#+end_src + +#+RESULTS: +#+begin_example +Filesystem IUsed +dev 717 +run 1314 +/dev/sdb3 720461 +tmpfs 1 +tmpfs 22 +/dev/sdb4 677792 +/dev/sdb6 0 +tmpfs 71 +/dev/sda9 16193 +/dev/sda8 33171 +#+end_example + +** What is the filesystem type of the EFI partition? +FAT32 +** What device is mounted at your root / directory? Show proof. +#+begin_src bash +lsblk +#+end_src + +#+RESULTS: +#+begin_example +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS +sda 8:0 0 931.5G 0 disk +... +sdb 8:16 0 465.8G 0 disk +├─sdb1 8:17 0 16M 0 part +├─sdb2 8:18 0 250G 0 part +├─sdb3 8:19 0 64.7G 0 part / +├─sdb4 8:20 0 97.2G 0 part /home +├─sdb5 8:21 0 8G 0 part [SWAP] +├─sdb6 8:22 0 781M 0 part /boot +├─sdb7 8:23 0 44G 0 part +└─sdb8 8:24 0 1G 0 part +#+end_example +/dev/sdb3 + +** What is your partition UUID? +For PARTUUID: +#+begin_src bash +lsblk -dno PARTUUID /dev/sdb3 +#+end_src + +#+RESULTS: +: fbef9613-fbf5-8445-8d1c-7a63709d1229 + +** Show at least two methods of viewing the UUID of a block device. 
+lsblk -dno UUID /dev/sdb3 +blkid + +** What is the function of /dev/zero? +Source of zero bytes. Can be used with dd to fill a file with zeros. diff --git a/week2/lab2.html b/week2/lab2.html new file mode 100644 index 0000000..49fb593 --- /dev/null +++ b/week2/lab2.html @@ -0,0 +1,392 @@ + + + + + + + + + + + + + Lab 2: OS main components - HackMD + + + + + + + + + + + + + + + + + +

Lab 2: OS main components

Environment Preparation

    +
  1. Ensure that you enabled EFI standard of booting. How did you check this?
  2. +

Exercise 1: GPT partition

MBR Dump and Analysis

GPT Header Dump and Analysis

Questions to answer

    +
  1. What is fdisk utility used for?
  2. +
  3. Show the bootable device(s) on your machine, and identify which partition(s) are bootable.
  4. +
  5. What is logical block address?
  6. +
  7. Why did we specify the count, the bs, and the skip options when using dd?
  8. +
  9. Why does a GPT formatted disk have the MBR?
  10. +
  11. Name two differences between primary and logical partitions in an MBR partitioning scheme
  12. +

Exercise 2 - UEFI Booting

The Unified Extensible Firmware Interface Specification describes an interface between the operating system and the platform firmware.

1. Boot sequence

Questions to answer

    +
  1. Why is Shim used to load the GRUB bootloader?
  2. +
  3. Can you locate your grub configuration file? Show the path.
  4. +
  5. According to the boot order, what is the third boot device on your computer? How did you check this?
  6. +

Exercise 3: Filesystem

Questions to answer

    +
  1. How many inodes are in use on your system?
  2. +
  3. What is the filesystem type of the EFI partition?
  4. +
  5. What device is mounted at your root / directory? Show proof.
  6. +
  7. What is your partition UUID?
  8. +
  9. Show at least two methods of viewing the UUID of a block device.
  10. +
  11. What is the function of /dev/zero?
  12. +
+ + + + + + + + + diff --git a/week3/lab3-solution.html b/week3/lab3-solution.html new file mode 100644 index 0000000..27358c8 --- /dev/null +++ b/week3/lab3-solution.html @@ -0,0 +1,318 @@ + + + + + + + +Lab3 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab3 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+

+A pipe takes the stdout of one command and forwards it to another command as its stdin. +

+
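For example:

ps aux | grep sshd | wc -l   # stdout of ps feeds grep; grep's stdout feeds wc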
+
+
+

2. Question 2

+
+

+File formats and conventions +

+
+
+
+

3. Question 3

+
+
+
which ls
+
+
+ +
+/usr/bin/ls
+
+
+
+
+

4. Question 4

+
+

+mv test_file.tot test_file.txt +rename 's/tot/txt/' test_file.tot +

+
+
+
+

5. Question 5

+
+
+
echo -e "The location of hundreds of crab pots\nLittle Red Riding Hood\nThe location of hundreds of crab pots\nThe location of hundreds of crab pots\nThe sound of thunder\nEight hours in a row\nAll aboard\nEight hours in a row" | sort | uniq > newfile.txt; whoami >> newfile.txt
+
+
+
+
+ +
+

6. Question 6

+
+

+ping 127.0.0.1 &> /dev/null +

+
+
+
+

7. Question 7

+
+
+
sort | nl -ba > output.txt
+
+
+
+
+ +
+

8. Question 8

+
+

+cd /home/rinri/testdir +cd ../../home/rinri/testdir +cd ~/testdir +cd; cd testdir +

+
+
+
+

9. Question 9

+
+
+
awk -F ':' '{print $7;}' /etc/passwd | sort | uniq
+
+
+ +
+/bin/bash
+/bin/false
+/bin/zsh
+/sbin/nologin
+/usr/bin/git-shell
+/usr/bin/nologin
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-02-17 Fri 00:07

+
+ + \ No newline at end of file diff --git a/week3/lab3-solution.org b/week3/lab3-solution.org new file mode 100644 index 0000000..ab6835d --- /dev/null +++ b/week3/lab3-solution.org @@ -0,0 +1,50 @@ +#+title: Lab3 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +Pipe takes stdout of one command and forwards it to another as stdin +* Question 2 +File formats and conventions +* Question 3 +#+begin_src bash +which ls +#+end_src + +#+RESULTS: +: /usr/bin/ls +* Question 4 +mv test_file.tot test_file.txt +rename 's/tot/txt/' test-file.tot +* Question 5 +#+begin_src bash +echo -e "The location of hundreds of crab pots\nLittle Red Riding Hood\nThe location of hundreds of crab pots\nThe location of hundreds of crab pots\nThe sound of thunder\nEight hours in a row\nAll aboard\nEight hours in a row" | sort | uniq > newfile.txt; whoami >> newfile.txt +#+end_src +* Question 6 +ping 127.0.0.1 &> /dev/null +* Question 7 +#+begin_src bash +sort | nl -ba > output.txt +#+end_src + +#+RESULTS: + +* Question 8 +cd /home/rinri/testdir +cd ../../home/rinri/testdir +cd ~/testdir +cd; cd testdir +* Question 9 +#+begin_src bash +awk -F ':' '{print $7;}' /etc/passwd | sort | uniq +#+end_src + +#+RESULTS: +: /bin/bash +: /bin/false +: /bin/zsh +: /sbin/nologin +: /usr/bin/git-shell +: /usr/bin/nologin diff --git a/week4/lab4-solution.html b/week4/lab4-solution.html new file mode 100644 index 0000000..7d34ce9 --- /dev/null +++ b/week4/lab4-solution.html @@ -0,0 +1,305 @@ + + + + + + + +Lab4 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab4 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+
+
grep -E '(ERROR|WARNING)' server-data.log
+
+
+ +
+2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+
+
+
+ +
+

2. Question 2

+
+
+
grep -v 'INFO' server-data.log
+
+
+ +
+2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+
+
+
+ +
+

3. Question 3

+
+
+
grep -c 'ERROR' server-data.log
+
+
+ +
+2
+
+
+
+ +
+

4. Question 4

+
+
+
sed -E 's/([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\/([0-2]?[0-9]|3[0-2])/xxx.xxx.xxx.xxx\/xx/g' server-data.log > newlog.log
+cat newlog.log
+
+
+ +
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx'
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.410.15.0/24'
+2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx'
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx'
+2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: 'xxx.xxx.xxx.xxx/xx'
+2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx'
+Log1 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx'
+2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' END
+2022/09/18 13:25:35 wazuh-remoted: ACTION: none INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx'
+
+
+
+ +
+

5. Question 5

+
+
+
grep -P "^2022\/09\/18 13:25:(34|35) wazuh-remoted: (INFO|ERROR|WARNING): Remote syslog (allowed|blocked|not parsed) from: '10\.110\.(15|18)\.0\/24'$" server-data.log
+
+
+ +
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
+2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
+2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-02-22 Wed 15:55

+
+ + \ No newline at end of file diff --git a/week4/lab4-solution.org b/week4/lab4-solution.org new file mode 100644 index 0000000..852a526 --- /dev/null +++ b/week4/lab4-solution.org @@ -0,0 +1,62 @@ +#+title: Lab4 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +#+begin_src bash +grep -E '(ERROR|WARNING)' server-data.log +#+end_src + +#+RESULTS: +: 2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' + +* Question 2 +#+begin_src bash +grep -v 'INFO' server-data.log +#+end_src + +#+RESULTS: +: 2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' + +* Question 3 +#+begin_src bash +grep -c 'ERROR' server-data.log +#+end_src + +#+RESULTS: +: 2 + +* Question 4 +#+begin_src bash +sed -E 's/([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.([01]?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\/([0-2]?[0-9]|3[0-2])/xxx.xxx.xxx.xxx\/xx/g' server-data.log > newlog.log +cat newlog.log +#+end_src + +#+RESULTS: +: 2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +: 2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.410.15.0/24' +: 2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx' +: 2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +: 2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: 'xxx.xxx.xxx.xxx/xx' +: 2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx' +: Log1 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +: 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' END +: 2022/09/18 13:25:35 wazuh-remoted: ACTION: none INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' + +* Question 5 +#+begin_src bash +grep -P "^2022\/09\/18 13:25:(34|35) wazuh-remoted: (INFO|ERROR|WARNING): Remote syslog (allowed|blocked|not parsed) from: '10\.110\.(15|18)\.0\/24'$" server-data.log +#+end_src + +#+RESULTS: +: 2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' +: 2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' +: 2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24' +: 2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' diff --git a/week4/lab4.html b/week4/lab4.html new file mode 100644 index 0000000..110e813 --- /dev/null +++ b/week4/lab4.html @@ -0,0 +1,403 @@ + + + + + + + + + + + + + Lab 4: Text filtering editors - HackMD + + + + + + + + + + + + + + + + + +

Lab 4: Text filtering editors

Copy the following files from /etc to your home root folder

$ cp /etc/fstab ~
+$ cp /etc/passwd ~
+

Part 1: Grep

The grep command searches for lines matching a pattern and prints the matching lines to output.

It is also necessary in some cases to print the lines before or after a match.

+

Regex cheat sheet: https://quickref.me/grep

+

Part 2: AWK

AWK is a language designed for text processing and typically used as a data extraction and reporting tool. It can be used like sed and grep to filter data with additional capabilities. It is a standard feature of most Unix-like operating systems.

Part 3: SED

The sed command (short for stream editor) performs editing operation on text coming from standard input or file. The sed command can be used like grep but it has more functionalities.

Questions to answer

Save the following lines to a file server-data.log.

2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.410.15.0/24'
+2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
+2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24'
+2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
+Log1 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
+2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' END
+2022/09/18 13:25:35 wazuh-remoted: ACTION: none INFO: Remote syslog allowed from: '10.110.15.0/24'
+
+

The following tasks are to be completed with either grep, sed, or awk.
+All actions are to be performed on server-data.log

+
    +
  1. View only error and warning messages in server-data.log. Show how you can do this with grep and awk.
  2. +
  3. View every line except lines with informational messages.
  4. +
  5. Count how many error messages are in the log.
  6. +
  7. Hide the IP addresses. Replace all IP addresses with xxx.xxx.xxx.xxx/xx and save the output to a file newlog.log. Show the output. +
    +

    This simulates a scenario where you want to send your logs to a third-party and you need to hide some information in the log messages.

    +
    +
  8. +
  9. Write a single regular expression to match the following lines in server-data.log. Show the full command and regex used.
    2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
    +2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
    +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24'
    +2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24'
    +2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24'
    +
    +Try to be as strict as possible when matching. Identify all the fields in the logs, find the common patterns in them and match as much as you can. Your regex should validate data where necessary.
    +For example, using the wildcard . to match huge portions of the lines reduces the quality of the regex. +
    +

    Of course, you can use wildcards. Just don’t use them excessively.

    +
    +
  10. +

Bonus

    +
  1. Consider the following log:
    at com.databricks.backend.daemon.data.client.DatabricksFileSystemV2.recordOperation(DatabricksFileSystemV2.scala:474)
    +at com.databricks.backend.daemon.data.client.DBFSV2.initialize(DatabricksFileSystemV2.scala:64)
    +at com.databricks.backend.daemon.data.client.DatabricksFileSystem.initialize(DatabricksFileSystem.scala:222)
    +at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    +at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    +at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    +
    +Write a sed one-liner that will show stack traces lines in the following fashion:
    Exception occured inside method `org.apache.hadoop.fs.FileSystem$Cache.getInternal` from file `FileSystem.java` on line `2703`. The file was written in `java`.
    +
    +Called method org.apache.hadoop.fs.FileSystem$Cache.getInternal which calls line 2703 of file FileSystem.java. The file is written in java.
    +
    +HINT: sed capture groups are extra useful here
  2. +
+ + + + + + + + + diff --git a/week4/newlog.log b/week4/newlog.log new file mode 100644 index 0000000..9344caf --- /dev/null +++ b/week4/newlog.log @@ -0,0 +1,9 @@ +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.410.15.0/24' +2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx' +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: 'xxx.xxx.xxx.xxx/xx' +2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: 'xxx.xxx.xxx.xxx/xx' +Log1 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' +2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' END +2022/09/18 13:25:35 wazuh-remoted: ACTION: none INFO: Remote syslog allowed from: 'xxx.xxx.xxx.xxx/xx' diff --git a/week4/server-data.log b/week4/server-data.log new file mode 100644 index 0000000..8baff71 --- /dev/null +++ b/week4/server-data.log @@ -0,0 +1,9 @@ +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.410.15.0/24' +2022/09/18 13:25:34 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' +2022/09/18 13:25:34 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' +2022/09/18 13:25:35 wazuh-remoted: WARNING: Remote syslog not parsed from: '10.110.18.0/24' +2022/09/18 13:25:35 wazuh-remoted: ERROR: Remote syslog blocked from: '10.110.18.0/24' +Log1 2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' +2022/09/18 13:25:35 wazuh-remoted: INFO: Remote syslog allowed from: '10.110.15.0/24' END +2022/09/18 13:25:35 wazuh-remoted: ACTION: none INFO: Remote syslog allowed from: '10.110.15.0/24' diff --git a/week5/lab5-solution.html b/week5/lab5-solution.html new file mode 100644 index 0000000..0800ad0 --- /dev/null +++ b/week5/lab5-solution.html @@ -0,0 +1,306 @@ + + + + + + + +Lab5 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab5 Solution Amirlan Sharipov (BS21-CS-01)

+
+

Table of Contents

+ +
+ +
+

1. Question 1

+
+
+
echo "Username: $(whoami)"
+echo "Home Directory: $HOME"
+echo "Shell: $SHELL"
+echo "Hostname: $(hostname)"
+ipaddress="$(ip addr | grep -A5 "enp2s0f1:" | grep "inet .*" | awk '{print $2}')"
+echo "IP address: $ipaddress"
+
+
+ +
+Username: rinri
+Home Directory: /home/rinri
+Shell: /bin/zsh
+Hostname: akemi
+IP address: 10.244.1.78/24
+
+
+
+ +
+

2. Question 2

+
+
+
HOME=lab5-solution.org # THIS IS DONE TO NOT MAKE HOME BACKUP WHEN EXPORTING
+sudo mkdir -p /var/backups
+FNAME="$(date '+/var/backups/home_backup_%b_%d_%Y_%H_%M_%S.tar.gz')"
+sudo tar caf "$FNAME" "$HOME"
+
+
+
+
+ +
+

3. Question 3

+
+
+
uname -svm
+
+w
+
+CUR_USERS="$(who | awk '{print $1}' | sort | uniq)"
+
+if [ -d "/sys/firmware/efi" ]; then
+    echo "EFI"
+else
+    echo "No EFI"
+fi
+
+lsblk -o "+PTTYPE" | sed 's/^\([A-Za-z]*\) \(.*\)gpt/\1*\2gpt/'
+
+efibootmgr | grep "Boot$(efibootmgr | grep "BootOrder" | awk '{print substr($2, 1, 4);}')"
+
+
+ +
+Linux #1 SMP PREEMPT_DYNAMIC Tue, 14 Feb 2023 22:08:08 +0000 x86_64
+ 23:55:56 up  9:44,  2 users,  load average: 1.35, 1.71, 1.34
+USER     TTY        LOGIN@   IDLE   JCPU   PCPU WHAT
+rinri    tty1      14:11    9:44m  8:14   0.00s xinit /home/rinri/.xinitrc -- /etc/X11/xinit/xserverrc :0 vt1 -keeptty -auth /tmp/serverauth.unEBAJdBrv
+root     tty2      14:46    9:09m  0.00s  0.00s -bash
+EFI
+NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS PTTYPE
+sda*     8:0    0 931.5G  0 disk             gpt
+├─sda1   8:1    0   499M  0 part             gpt
+├─sda2   8:2    0   100M  0 part             gpt
+├─sda3   8:3    0    16M  0 part             gpt
+├─sda4   8:4    0  42.4G  0 part             gpt
+├─sda5   8:5    0  38.8G  0 part             gpt
+├─sda6   8:6    0   100M  0 part             gpt
+├─sda7   8:7    0   3.1G  0 part             gpt
+├─sda8   8:8    0   315G  0 part /mnt/rec    gpt
+└─sda9   8:9    0 531.5G  0 part /mnt/data   gpt
+sdb*     8:16   0 465.8G  0 disk             gpt
+├─sdb1   8:17   0    16M  0 part             gpt
+├─sdb2   8:18   0   250G  0 part             gpt
+├─sdb3   8:19   0  64.7G  0 part /           gpt
+├─sdb4   8:20   0  97.2G  0 part /home       gpt
+├─sdb5   8:21   0     8G  0 part [SWAP]      gpt
+├─sdb6   8:22   0   781M  0 part /boot       gpt
+├─sdb7   8:23   0    44G  0 part             gpt
+└─sdb8   8:24   0     1G  0 part             gpt
+Boot0003* Arch BTW	HD(7,GPT,1d75b784-ef17-3c4d-9e74-0b058e17bf83,0x3a1ff800,0x18680f)/File(\vmlinuz-linux)72006f006f0074003d00500041005200540055005500490044003d00660062006500660039003600310033002d0066006200660035002d0038003400340035002d0038006400310063002d00370061003600330037003000390064003100320032003900200072006500730075006d0065003d00500041005200540055005500490044003d00660033003700300031003100360031002d0035006500350036002d0065003000340037002d0061006200640031002d00390063003600380034003100320030006400340061006400200072007700200069006e0069007400720064003d005c0069006e0069007400720061006d00660073002d006c0069006e00750078002e0069006d006700
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-03-02 Thu 23:55

+
+ + \ No newline at end of file diff --git a/week5/lab5-solution.org b/week5/lab5-solution.org new file mode 100644 index 0000000..27f9c5a --- /dev/null +++ b/week5/lab5-solution.org @@ -0,0 +1,82 @@ +#+title: Lab5 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +#+begin_src bash +echo "Username: $(whoami)" +echo "Home Directory: $HOME" +echo "Shell: $SHELL" +echo "Hostname: $(hostname)" +ipaddress="$(ip addr | grep -A5 "enp2s0f1:" | grep "inet .*" | awk '{print $2}')" +echo "IP address: $ipaddress" +#+end_src + +#+RESULTS: +: Username: rinri +: Home Directory: /home/rinri +: Shell: /bin/zsh +: Hostname: akemi +: IP address: 10.244.1.78/24 + +* Question 2 +#+begin_src bash +HOME=lab5-solution.org # THIS IS DONE TO NOT MAKE HOME BACKUP WHEN EXPORTING +sudo mkdir -p /var/backups +FNAME="$(date '+/var/backups/home_backup_%b_%d_%Y_%H_%M_%S.tar.gz')" +sudo tar caf $FNAME $HOME +#+end_src + +#+RESULTS: + +* Question 3 +#+begin_src bash +uname -svm + +w + +CUR_USERS="$(who | awk '{print $1}' | sort | uniq)" + +if [ -d "/sys/firmware/efi" ]; then + echo "EFI" +else + echo "No EFI" +fi + +lsblk -o "+PTTYPE" | sed 's/^\([A-Za-z]*\) \(.*\)gpt/\1*\2gpt/' + +efibootmgr | grep "Boot$(efibootmgr | grep "BootOrder" | awk '{print substr($2, 0, 4);}')" +#+end_src + +#+RESULTS: +#+begin_example +Linux #1 SMP PREEMPT_DYNAMIC Tue, 14 Feb 2023 22:08:08 +0000 x86_64 + 23:43:06 up 9:31, 2 users, load average: 0.94, 1.05, 0.96 +USER TTY LOGIN@ IDLE JCPU PCPU WHAT +rinri tty1 14:11 9:31m 7:29 0.00s xinit /home/rinri/.xinitrc -- /etc/X11/xinit/xserverrc :0 vt1 -keeptty -auth /tmp/serverauth.unEBAJdBrv +root tty2 14:46 8:56m 0.00s 0.00s -bash +EFI +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS PTTYPE +sda* 8:0 0 931.5G 0 disk gpt +├─sda1 8:1 0 499M 0 part gpt +├─sda2 8:2 0 100M 0 part gpt +├─sda3 8:3 0 16M 0 part gpt +├─sda4 8:4 0 42.4G 0 part gpt +├─sda5 8:5 0 38.8G 0 part gpt +├─sda6 8:6 0 100M 0 part gpt +├─sda7 8:7 0 3.1G 0 part gpt +├─sda8 8:8 0 315G 0 part /mnt/rec gpt +└─sda9 8:9 0 531.5G 0 part /mnt/data gpt +sdb* 8:16 0 465.8G 0 disk gpt +├─sdb1 8:17 0 16M 0 part gpt +├─sdb2 8:18 0 250G 0 part gpt +├─sdb3 8:19 0 64.7G 0 part / gpt +├─sdb4 8:20 0 97.2G 0 part /home gpt +├─sdb5 8:21 0 8G 0 part [SWAP] gpt +├─sdb6 8:22 0 781M 0 part /boot gpt +├─sdb7 8:23 0 44G 0 part gpt +└─sdb8 8:24 0 1G 0 part gpt +Boot0003* Arch BTW HD(7,GPT,1d75b784-ef17-3c4d-9e74-0b058e17bf83,0x3a1ff800,0x18680f)/File(\vmlinuz-linux)72006f006f0074003d00500041005200540055005500490044003d00660062006500660039003600310033002d0066006200660035002d0038003400340035002d0038006400310063002d00370061003600330037003000390064003100320032003900200072006500730075006d0065003d00500041005200540055005500490044003d00660033003700300031003100360031002d0035006500350036002d0065003000340037002d0061006200640031002d00390063003600380034003100320030006400340061006400200072007700200069006e0069007400720064003d005c0069006e0069007400720061006d00660073002d006c0069006e00750078002e0069006d006700 +#+end_example diff --git a/week5/lab5.html b/week5/lab5.html new file mode 100644 index 0000000..a8bacf4 --- /dev/null +++ b/week5/lab5.html @@ -0,0 +1,877 @@ + + + + + + + + + + + + + Lab 5: Bash scripting - HackMD + + + + + + + + + + + + + + + + + +

Lab 5: Bash scripting

Exercise 1 - Bash basics

Task 1 - Bash scripting basics

Task 2 - Bash loops and conditions

Task 3 - If statements

Task 4 - Bash functions

Exercise 2 - Working with files and directories

Task 1 - File and directory test operators

There are several options in bash to check the type of file you are interacting with. In many cases, the options are also used to check for the existence of a specified file or directory. The example below shows the options that can be used.
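A small illustration of the common test operators (the original example block from the lab is not reproduced here; /etc/passwd is just a file that exists on most systems):

target=/etc/passwd
[ -e "$target" ] && echo "exists"
[ -f "$target" ] && echo "regular file"
[ -d "$target" ] && echo "directory"
[ -L "$target" ] && echo "symbolic link"
[ -r "$target" ] && echo "readable"
[ -w "$target" ] && echo "writable"
[ -x "$target" ] && echo "executable"
[ -s "$target" ] && echo "not empty"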

Task 2 - Directory and file manipulation

Task 3 - Jump directories

Sometimes it is difficult to navigate directories with the possibly infinite number of parent directories we need to provide. For example cd ../../../../../. Let’s create a script that will help us jump to a specified directory without executing cd ../.
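One possible sketch, assuming the intended interface is `up N` (the lab may expect a different one); the function must be sourced, e.g. from ~/.bashrc, because a cd inside a child process cannot change the calling shell's directory:

up() {
    local levels="${1:-1}" path="" i
    for ((i = 0; i < levels; i++)); do
        path+="../"
    done
    cd "$path" || return      # e.g. `up 4` is equivalent to cd ../../../..
}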

Exercise 3 - Hash tables and more bash usage

Task 1 - Hash tables in bash

A dictionary, or a hashmap, or an associative array is a data structure used to store a collection of things. A dictionary consists of a collection of key-value pairs. Each key is mapped to its associated value.
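A minimal sketch of a bash associative array (requires bash 4 or newer; the sample data is made up):

declare -A capital                    # declare an associative array
capital[France]="Paris"
capital[Japan]="Tokyo"

echo "Capital of Japan: ${capital[Japan]}"
for country in "${!capital[@]}"; do   # iterate over the keys
    echo "$country -> ${capital[$country]}"
done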

Task 2 - Use sed in bash

Task 3 - Execute Python commands in bash

Exercise 4 - Debugging bash scripts

Task 1 - Command exit code

You can verify whether a bash command executed successfully by viewing the exit status code of the command. The exit status of the previously executed command is stored in the $? variable. A successful command returns a 0, while an unsuccessful one returns a non-zero value that usually can be interpreted as an error code.
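A quick illustration of $? (the comments assume /etc/passwd exists and contains a root entry, which is true on virtually every Linux system):

grep -q root /etc/passwd
echo "exit status: $?"        # 0 - the pattern was found

grep -q some_unlikely_user_name /etc/passwd
echo "exit status: $?"        # 1 - grep ran fine but found no match

ls /no/such/directory 2>/dev/null
echo "exit status: $?"        # non-zero - the command itself failed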

Task 2 - Using set -xe

When there is an error that stops the execution of a bash script, the bash interpreter usually displays the line number that triggered the error. However, in some cases, it might be necessary to trace the flow of execution of the script. This provides more insight into the conditions that are met, and the state of the loops.
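A short sketch of both options together (the paths used here are only examples):

#!/bin/bash
set -xe                       # -x: trace each command before it runs; -e: abort on the first failure

echo "creating a scratch directory"
mkdir -p /tmp/setxe-demo
cd /tmp/setxe-demo
echo "reached only if the commands above succeeded"
false                         # a failing command: -e stops the script here
echo "never printed"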

Questions to answer

+
    +
  • Upload the shell script files to moodle.
  • +
  • Test every feature in your script and show screenshots of the test in your report.
  • +
+
    +
  1. +

    Write a bash script that displays the following details of the logged-in user from the environment variables:

    +
      +
    • Login username
    • +
    • Home directory
    • +
    • Shell
    • +
    • The hostname of the system
    • +
    • The script should extract the IP address of the system from the ifconfig or ip command. Save the IP address to the ipaddress variable and display it as output.
    • +
    +

    Sample output:

    +
    Username: user1
    +Home Directory: /home/user1
    +Shell: /bin/bash
    +Hostname: ubuntuvm
    +IP address: 10.1.1.1
    +
    +
  2. +
  3. +

    Backups are important in system administration. Create a script that will backup your home directory.

    +
      +
    • The backup file should be compressed to tar.gz.
    • +
    • All files and directory permissions should be preserved in the backup.
    • +
    • The backup destination directory is /var/backups/
    • +
    • The script should create the destination directory if it doesn’t already exist.
    • +
    • The backup file name should take the format home_backup_month_day_year_hour_minute_second.tar.gz.
      +For example home_backup_Feb_18_2023_02_30_02.tar.gz
    • +
    +
    +

    A typical real life scenario is keeping backups of websites. Administrators are usually interested in backing up /var/www/html.

    +
    +
  4. +
  5. +

    Write a bash script that checks various artifacts on the system. The script mainly checks for system information, and OS components. Your script should do the following:

    +
      +
    • Print the OS kernel name and kernel version.
    • +
    • Print the system architecture.
    • +
    • Print all currently logged in users (show the date or time at which the users logged in, and show the command line of the users’ current process).
    • +
    • Verify that EFI is enabled and print the relevant output.
    • +
    • List all connected block devices (Bonus: Identify the devices that have the GPT partition by adding an * to them in the output).
    • +
    • List the first boot device on your system. This should be done according to the boot order in the NVRAM.
    • +
    +
    +

    Ensure that the output of your script is neatly formatted and easy to read.
    +Bonus points if you create and use at least three functions.

    +
    +
  6. +

Bonus

    +
  1. Write a bash script that scans the entire system for files that contain the string “/bin/bash”. The script should print only the matches that the currently logged in user has execute permission on.
  2. +
+ + + + + + + + + diff --git a/week6/helloworld.sh b/week6/helloworld.sh new file mode 100755 index 0000000..7bdafee --- /dev/null +++ b/week6/helloworld.sh @@ -0,0 +1,8 @@ +#!/bin/bash + +trap 'echo "Interrupt received"; exit' SIGUSR1 +while : +do + echo "Hello World" + sleep 10 +done diff --git a/week6/lab6-solution.html b/week6/lab6-solution.html new file mode 100644 index 0000000..defacbd --- /dev/null +++ b/week6/lab6-solution.html @@ -0,0 +1,424 @@ + + + + + + + +Lab5 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab5 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+
+
+

1.1. What is a zombie process

+
+

+Zombie processes appear in parent-child process relationships. They have finished executing, but they still have an entry in the process table. This happens, for example, when a child process has exited but its parent has not yet acknowledged it (read its exit status with wait()). +

+
+
+
+

1.2. Finding zombie processes

+
+

+I created a zombie process using C and then ran these commands: +

+
+
bat zombie.c
+gcc zombie.c
+./a.out &
+ps aux | grep "defunct" | grep -v "grep"
+
+
+ +
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+int main() {
+	pid_t p = fork();
+	if (p == 0) {
+		exit(0);
+	} else {
+		sleep(10);
+	}
+}
+rinri     810248  0.0  0.0      0     0 ?        Z    19:11   0:00 [a.out] <defunct>
+
+

+So, using the last command I can find zombie processes. Assuming there is only one zombie process, I can get rid of it by killing its parent process: +

+
+
kill -9 $(ps -o ppid= -p $(ps aux | grep -m 1 "defunct" | grep -v "grep" | awk '{print $2;}'))
+
+
+ +

+Building on this one-liner, it’s possible to create a script that kills all the zombie processes (a sketch follows below). +
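A sketch of that idea (an assumption on my part: killing the zombies' parents is acceptable on this machine; use with care):

for ppid in $(ps -eo ppid=,stat= | awk '$2 ~ /Z/ {print $1}' | sort -u); do
    [ "$ppid" -gt 1 ] || continue   # never target init/systemd
    echo "killing parent $ppid of a zombie process"
    kill -9 "$ppid"
done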

+
+
+ +
+

1.3. kill vs killall vs pkill

+
+

+kill sends the specified signal to the given pid. +killall sends the specified signal to all the processes by the given process name. +pkill sends the specified signal to all the processes matching the given pattern. +
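For illustration (the PID and the names below are hypothetical):

kill -TERM 12345                      # signal one specific PID
killall -TERM firefox                 # signal every process named exactly "firefox"
pkill -TERM -f 'fun[0-9]+process'     # signal processes whose full command line matches the pattern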

+
+
+ +
+

1.4. top

+
+
+
top -b -n 1 | head -n 5
+
+
+ +
+top - 19:12:03 up  2:35,  1 user,  load average: 2.29, 2.75, 2.98
+Tasks: 299 total,   1 running, 298 sleeping,   0 stopped,   0 zombie
+%Cpu(s): 17.4 us,  6.5 sy,  0.0 ni, 73.2 id,  0.0 wa,  0.0 hi,  2.9 si,  0.0 st
+MiB Mem :  14886.6 total,   6722.9 free,   4607.7 used,   3556.0 buff/cache
+MiB Swap:   8243.0 total,   8243.0 free,      0.0 used.   9791.8 avail Mem 
+
+ +

+(Explaining the first evaluation; results may differ after export.) +The Tasks line says that there are 283 processes in total, 1 of them currently running and the others sleeping; there are no stopped and no zombie processes. +The %Cpu(s) line describes processor utilization. +(Taken from the Lab6 file) + us is the percent of time spent running user processes. + sy is the percent of time spent running the kernel. + ni is the percent of time spent running processes with manually configured nice values. + id is the percent of time idle (if low, CPU may be overworked). + wa is the percent of wait time (if high, CPU is waiting for I/O access). + hi is the percent of time managing hardware interrupts. + si is the percent of time managing software interrupts. + st is the percent of virtual CPU time waiting for access to physical CPU. + Values such as id, wa, and st help identify whether the system is overworked. +

+
+
+ +
+

1.5. kill fun processes script

+
+
+
bash -c "exec -a fun${RANDOM}process sleep infinity" &
+bash -c "exec -a fun${RANDOM}process sleep infinity" &
+bash -c "exec -a fun${RANDOM}process sleep infinity" &
+
+FUNPROCS="$(pgrep -f 'fun.*process.*infinity')"
+echo "Found $(echo "$FUNPROCS" | wc -l) processes:"
+echo "$FUNPROCS"
+for FUNPROC in $FUNPROCS
+do
+    kill -9 "$FUNPROC"
+    echo "killed $FUNPROC"
+done
+
+
+ +
+Found 3 processes:
+811145
+811146
+811147
+killed 811145
+killed 811146
+killed 811147
+
+
+
+
+

1.6. Hello world

+
+
+
bat helloworld.sh
+
+
+ +
+#!/bin/bash
+
+trap 'echo "Interrupt received"; exit' SIGUSR1
+while :
+do
+    echo "Hello World"
+    sleep 10
+done
+
+ + +

+To kill: +kill -s SIGUSR1 "$(ps aux | grep "helloworld.sh" | grep -v "grep" | awk '{print $2}')" +

+
+
+
+

1.7. System util

+
+
+
bat status.sh
+
+
+ +
+#!/bin/bash
+
+while :
+do
+    # 100 minus the idle column of top's %Cpu(s) line
+    CPUUSAGE="$(top -b -n 1 | grep "Cpu" | awk '{print 100-$8}')%"
+    # awk uses the numeric prefix of free -h values (e.g. 4.5 from 4.5Gi)
+    MEMUSAGE="$(free -h | grep "Mem" | awk '{print $3/$2*100}')%"
+    # disk usage for root directory
+    DISKUSAGE="$(df -h | awk '{ if ($6 == "/")
+    print $5;
+    }')"
+
+    echo "$(date) CPU: $CPUUSAGE Mem: $MEMUSAGE Disk:$DISKUSAGE" >> /var/log/system_utilization.log
+    sleep 15
+done
+
+ +
+
bat /var/log/system_utilization.log
+
+
+ +
+Thu Mar  9 07:09:13 PM MSK 2023 CPU: 43.2% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:09:29 PM MSK 2023 CPU: 45.7% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:09:44 PM MSK 2023 CPU: 34.1% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:09:59 PM MSK 2023 CPU: 28.5% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:10:14 PM MSK 2023 CPU: 22.5% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:10:30 PM MSK 2023 CPU: 34.8% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:10:45 PM MSK 2023 CPU: 37.6% Mem: 32.1429% Disk:70%
+Thu Mar  9 07:11:00 PM MSK 2023 CPU: 29.8% Mem: 32.1429% Disk:70%
+
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-03-09 Thu 19:12

+
+ + \ No newline at end of file diff --git a/week6/lab6-solution.org b/week6/lab6-solution.org new file mode 100644 index 0000000..d66e8bf --- /dev/null +++ b/week6/lab6-solution.org @@ -0,0 +1,148 @@ +#+title: Lab5 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +** What is a zombie process +Zombie processes appear in parent-child process relationships. They are finished executing, but they are still in the process table. For example, when a child process finishes executing, but parent process didn't acknowledge it yet. +** Finding zombie processes +I created a zombie process using C and then run this command: +#+begin_src bash +bat zombie.c +gcc zombie.c +./a.out & +ps aux | grep "defunct" | grep -v "grep" +#+end_src + +#+RESULTS: +#+begin_example +#include +#include +#include + +int main() { + pid_t p = fork(); + if (p == 0) { + exit(0); + } else { + sleep(2); + } +} +rinri 178372 0.0 0.0 0 0 ? Z 17:11 0:00 [a.out] +#+end_example +So, using the last command I can find zombie processes. Let's say that I know that there is only one zombie process. To kill it, let's kill its parent process: +#+begin_src bash +kill -9 $(ps -o ppid= -p $(ps aux | grep -m 1 "defunct" | grep -v "grep" | awk '{print $2;}')) +#+end_src + +#+RESULTS: + +By using this line, it's possible to create script to kill all the zombie processes. + +** kill vs killall vs pkill +kill sends the specified signal to the given pid. +killall sends the specified signal to all the processes by the given process name. +pkill sends the specified signal to all the processes matching the given pattern. + +** top +#+begin_src bash +top -b -n 1 | head -n 5 +#+end_src + +#+RESULTS: +: top - 17:29:37 up 52 min, 1 user, load average: 0.97, 1.02, 1.18 +: Tasks: 283 total, 1 running, 282 sleeping, 0 stopped, 0 zombie +: %Cpu(s): 3.8 us, 3.0 sy, 0.0 ni, 92.5 id, 0.0 wa, 0.0 hi, 0.8 si, 0.0 st +: MiB Mem : 14886.6 total, 9247.6 free, 3192.2 used, 2446.7 buff/cache +: MiB Swap: 8243.0 total, 8243.0 free, 0.0 used. 11272.0 avail Mem +(Explaining the first evaluation, results may differ after export) +The Tasks line says, that there is 283 total processes, 1 of them is currently running, others are sleeping. No stopeed and no zombie processes. +The %Cpu(s) line says about processor utilization. +(Taken from the Lab6 file) + us is the percent of time spent running user processes. + sy is the percent of time spent running the kernel. + ni is the percent of time spent running processes with manually configured nice values. + id is the percent of time idle (if high, CPU may be overworked). + wa is the percent of wait time (if high, CPU is waiting for I/O access). + hi is the percent of time managing hardware interrupts. + si is the percent of time managing software interrupts. + st is the percent of virtual CPU time waiting for access to physical CPU. + Values such as id, wa, and st help identify whether the system is overworked. 
+ +** kill fun processes script +#+begin_src bash +bash -c "exec -a fun${RANDOM}process sleep infinity" & +bash -c "exec -a fun${RANDOM}process sleep infinity" & +bash -c "exec -a fun${RANDOM}process sleep infinity" & + +FUNPROCS="$(pgrep -f 'fun.*process.*infinity')" +echo "Found $(echo "$FUNPROCS" | wc -l) processes:" +echo "$FUNPROCS" +for FUNPROC in $FUNPROCS +do + kill -9 "$FUNPROC" + echo "killed $FUNPROC" +done +#+end_src + +#+RESULTS: +: Found 3 processes: +: 394357 +: 394358 +: 394359 +: killed 394357 +: killed 394358 +: killed 394359 +** Hello world +#+begin_src bash +bat helloworld.sh +#+end_src + +#+RESULTS: +: #!/bin/bash +: +: trap 'echo "Interrupt received"; exit' SIGUSR1 +: while : +: do +: echo "Hello World" +: sleep 10 +: done + +To kill: +kill -s SIGUSR1 "$(ps aux | grep "helloworld.sh" | grep -v "grep" | awk '{print $2}')" +** System util +#+begin_src bash +bat status.sh +#+end_src + +#+RESULTS: +#+begin_example +while : +do + CPUUSAGE="$(top -b - n 1 | grep "Cpu" | awk '{print 100-$8}')%" + MEMUSAGE="$(free -h | grep "Mem" | awk '{print $3/$2*100}')%" + # disk usage for root directory + DISKUSAGE="$(df -h | awk '{ if ($6 == "/") + print $5; + }')" + + echo "$(date) CPU: $CPUUSAGE Mem: $MEMUSAGE Disk:$DISKUSAGE" >> /var/log/system_utilization.log + sleep 15 +done +#+end_example + +#+begin_src bash +bat /var/log/system_utilization.log +#+end_src + +#+RESULTS: +: Thu Mar 9 07:09:13 PM MSK 2023 CPU: 43.2% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:09:29 PM MSK 2023 CPU: 45.7% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:09:44 PM MSK 2023 CPU: 34.1% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:09:59 PM MSK 2023 CPU: 28.5% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:10:14 PM MSK 2023 CPU: 22.5% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:10:30 PM MSK 2023 CPU: 34.8% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:10:45 PM MSK 2023 CPU: 37.6% Mem: 32.1429% Disk:70% +: Thu Mar 9 07:11:00 PM MSK 2023 CPU: 29.8% Mem: 32.1429% Disk:70% diff --git a/week6/lab6.html b/week6/lab6.html new file mode 100644 index 0000000..1c8c6d2 --- /dev/null +++ b/week6/lab6.html @@ -0,0 +1,522 @@ + + + + + + + + + + + + + Lab 6: Processes and signals - HackMD + + + + + + + + + + + + + + + + + +

Lab 6: Processes and signals

Exercise 1: Managing processes

Task 1: Process id and jobs

    +
  • We will start a few processes and manage them through the command line. Open a command shell and change directory to your home. Start the top command and put it into the background. Use & to put the process in the background
    $ top &
    +
    +
  • +
  • Then start a background process called yes and redirect its output to /dev/null (the bit bucket).
    $ yes > /dev/null &
    +
    +
  • +
  • Now let’s start an md5sum process to calculate the md5 hash of the first drive on the system. Notice how this hangs the prompt; it should take a long time to complete this task.
    $ md5sum /dev/sda
    +
    +
  • +
  • Let’s stop the process and run it to the background. To stop the process push CTRL+Z.
  • +
  • Now restart the job in the background. To see the job numbers, use the following. You can see that the top process was also stopped in the background
    $ jobs
    +
    +
  • +
+

Full-screen programs like “nano” or “top” started in the background are immediately stopped (by a SIGTTIN signal when they try to read from the tty, or by a SIGTTOU signal when they try to change the tty parameters). Many programs handle that badly.
+About tty and signals: http://www.linusakesson.net/programming/tty/index.php

+
    +
  • Identify id number of the job md5sum from the previous command and run it with bg command
    $ bg 3
    +
    +
  • +
  • Now list the current jobs running and stopped, see the changes
    $ jobs
    +
    +
  • +
  • We can also bring specific jobs to the terminal screen. Afterwards you can terminate the process by CTRL+C
    $ fg 3
    +
    +
  • +
+

CTRL+C – the controlling terminal (the TTY driver) sends a SIGINT signal to the current foreground job.

+
    +
  • To list the process IDs of the current processes running in the current shell
    $ ps
    +
    +
  • +
  • The fundamental way of controlling processes in Linux is by sending signals to them. There are multiple signals that you can send to a process. To view all the signals, run:
    $ kill -l
    +
    +
  • +
  • Identify the process ID for the yes process, in this example its ID is 27522. To kill this process with a SIGTERM (-15)
    $ kill 27522
    +
    +
  • +
  • If that failed, you can use a SIGKILL (-9)
    $ kill -9 27522
    +
    +
  • +
  • To list all process running on the system, issue the following command
    $ ps -ef
    +
    +
  • +
  • To find the process ID of a specific process named bash
    $ ps -ef | grep bash
    +
    +
  • +
  • Another useful command is the pstree command which shows a tree structure of the cascading process IDs (-p).
    $ pstree -p
    +
    +
  • +
  • When you press the CTRL+C or Break key at your terminal during execution of a shell program, normally that program is immediately terminated, and your command prompt returns. This may not always be desirable. For instance, you may end up leaving a bunch of temporary files that won’t get cleaned up.
  • +
  • Trapping these signals is quite easy, and the trap command has the following syntax:
    trap "commands" signals
    +
    +Here command can be any valid Unix command, or even a user-defined function, and signal can be a list of any number of signals you want to trap.
    +There are two common uses for trap in shell scripts: +
      +
    • Clean up temporary files
    • +
    • Ignore signals
    • +
    +
  • +
  • Let’s create a script with a trap SIGINT. Save the script as sleeper.sh
    + + + +
    #!/bin/bash + +trap "echo SIGINT encountered, Goodbye forever!" SIGINT +echo Hello, I am now going to sleep +sleep infinity +
    +
    +

    The command to execute when the trap is encountered must be in quotes.

    +
    +
  • +
  • Now run sleeper.sh
    $ bash sleeper.sh
    +Hello, I am now going to sleep
    +
    +
  • +
  • Send a SIGINT by pressing CTRL+C on the keyboard. You should have the following output:
    $ bash sleeper.sh
    +Hello, I am now going to sleep
    +^CSIGINT encountered, Goodbye forever!
    +
    +
    +

    Remember, you can also find the process ID and then use kill to send the signal in the form $ kill -signal pid

    +
    +
  • +
  • You can also use trap to ensure the user cannot interrupt the script execution. This feature is important when executing sensitive commands whose interruption may permanently damage the system. The syntax for disabling a signal is:
    trap "command" [signal]
    +
    +Double quotation marks mean that no command will be executed when the signal is received. For example, to trap the SIGINT and SIGABRT signals, type:
    trap "" SIGINT SIGABRT
    +
    +
  • +

Task 2: The proc file system

The /proc/ directory — also called the proc file system — contains a hierarchy of special files which represent the current state of the kernel — allowing applications and users to peer into the kernel’s view of the system.
+Within the /proc/ directory, one can find a wealth of information detailing the system hardware and any processes currently running. In addition, some of the files within the /proc/ directory tree can be manipulated by users and applications to communicate configuration changes to the kernel.
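A few quick read-only lookups, in addition to the /proc/cpuinfo example in the steps below (the exact output depends on your system):

$ cat /proc/uptime                # seconds since boot, followed by aggregate idle time
$ cat /proc/loadavg               # the same load averages reported by top and uptime
$ head -n 5 /proc/self/status     # status of the process doing the reading (here: head itself)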

    +
  • +

    You can view the /proc/ virtual files with the command line file readers. For example, view /proc/cpuinfo

    +
    $ cat /proc/cpuinfo
    +
    +

    You should receive output similar to the following:

    +
    processor	: 0
    +vendor_id	: AuthenticAMD
    +cpu family	: 25
    +model		: 80
    +model name	: AMD Ryzen 5 5600H with Radeon Graphics
    +stepping	: 0
    +microcode	: 0xffffffff
    +cpu MHz		: 3293.695
    +cache size	: 512 KB
    +physical id	: 0
    +siblings	: 2
    +core id		: 0
    +cpu cores	: 2
    +apicid		: 0
    +initial apicid	: 0
    +fpu		: yes
    +fpu_exception	: yes
    +cpuid level	: 13
    +wp		: yes
    +
    +

    When viewing different virtual files in the /proc/ file system, some of the information are easily understandable while some are not human-readable. This is in part why utilities exist to pull data from virtual files and display it in a useful way. Examples of these utilities include lspci, apm, free, and top.

    +
  • +
  • +

    Most virtual files within the /proc/ directory are read-only. However, some can be used to adjust settings in the kernel. This is especially true for files in the /proc/sys/ subdirectory.

    +
  • +
  • +

    To change the value of a virtual file, use the echo command and redirect (>) the new value to the file. For example, to change the hostname on the fly, type:

    +
    echo SNALabPC > /proc/sys/kernel/hostname 
    +
    +
  • +
  • +

    Other files act as binary or Boolean switches. Typing $ cat /proc/sys/net/ipv4/ip_forward returns either a 0 or a 1. A 0 indicates that the kernel is not forwarding network packets. Using the echo command to change the value of the ip_forward file to 1 immediately turns packet forwarding on (a short end-to-end example is shown at the end of this task).

    +
  • +
  • +

    On multi-user systems, it is often useful to secure the process directories stored in /proc/ so that they can be viewed only by the root user. You can restrict the access to these directories with the use of the hidepid option.

    +
  • +
  • +

    To change the file system parameters, you can use the mount command with the -o remount option.

    +
    $ sudo mount -o remount,hidepid=value /proc
    +
    +

    Here, value passed to hidepid is one of:

    +
      +
    • 0 (default) — every user can read all world-readable files stored in a process directory.
    • +
    • 1 — users can access only their own process directories. This protects the sensitive files like cmdline, sched, or status from access by non-root users. This setting does not affect the actual file permissions.
    • +
    • 2 — process files are invisible to non-root users. The existence of a process can be learned by other means, but its effective UID and GID is hidden. Hiding these IDs complicates an intruder’s task of gathering information about running processes.
    • +
    +
  • +
  • +

    To make process files accessible only to the root user, type:

    +
    $ sudo mount -o remount,hidepid=1 /proc
    +
    +

    With hidepid=1, a non-root user cannot access the contents of process directories. An attempt to do so fails with the following message:

    +
    $ ls /proc/1/
    +ls: /proc/1/: Operation not permitted
    +
    +

    With hidepid=2 enabled, process directories are made invisible to non-root users:

    +
    $ ls /proc/1/       
    +ls: /proc/1/: No such file or directory
    +
    +
  • +
  • +

    Also, you can specify a user group that will have access to process files even when hidepid is set to 1 or 2. To do this, use the gid option.

    +
    $ sudo mount -o remount,hidepid=value,gid=gid /proc
    +
    +
    +

    You can find system groups and their respective group IDs in /etc/group
    +Replace gid with the specific group id. For members of selected group, the process files will act as if hidepid was set to 0. However, users which are not supposed to monitor the tasks in the whole system should not be added to the group.

    +
    +
  • +
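Returning to the ip_forward switch mentioned earlier in this task, a small end-to-end example (assuming forwarding starts disabled; sudo tee is used because a plain output redirection would run with the current user's permissions):

$ cat /proc/sys/net/ipv4/ip_forward
0
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
1
$ cat /proc/sys/net/ipv4/ip_forward
1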

Task 3: top

    +
  • Open a command shell run the top command
    $ top
    +
    +
  • +
+

This opens up a tool that shows the top processes running on your system. This tool can be used to kill processes, renice processes, sort and various other process management. Press the h command to get a list of help.
+Read material:
+https://www.guru99.com/managing-processes-in-linux.html

+
    +
  • +

    By default, top sorts the process list using the %CPU column. To sort processes using a different column, press one of the following keys.

    +
      +
    • M Sort by the %MEM column.
    • +
    • N Sort by PID column.
    • +
    • T Sort by the TIME+ column.
    • +
    • P Sort by the %CPU column.
    • +
    +
  • +
  • +

    To show the process command line instead of just the process name, press c.

    +
  • +
  • +

    The filter feature allows using a filter expression to limit which processes to see in the list. Activate the filter option by pressing o. The program prompts you to enter a filter expression. You can enter the following to filter processes using more than 1% CPU.

    +
    %CPU>1.0
    +
    +
  • +
  • +

    Clear the filters by pressing =

    +
  • +
  • +

    To filter processes by a specific user, specify the -u option when you run the top command

    +
    $ top -u root
    +
    +
  • +
  • +

    The first five lines of the output show some useful statistics
    +

    +
      +
    • top displays uptime information
    • +
    • Tasks displays process status information
    • +
    • %Cpu(s) displays various processor values
    • +
    • MiB Mem displays physical memory utilization
    • +
    • MiB Swap displays virtual memory utilization
    • +
    +
  • +

Uptime
+Top’s first line, top, shows the same information as the uptime command. The first value is the system time. The second value represents how long the system has been up and running, while the third value indicates the current number of users on the system. The final values are the load average for the system.

The load average is broken down into three time increments. The first shows the load for the last minute, the second for the last five minutes, and the final value for the last 15 minutes. The values are not percentages: they are relative to the number of CPU cores, so on a single-core system 1.0 means the CPU is fully loaded, and anything above 1.0 means processes are waiting for the CPU.
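A quick way to judge these numbers on a particular machine (nproc is part of GNU coreutils):

$ nproc              # number of CPU cores the load average should be compared against
$ cat /proc/loadavg  # 1-, 5- and 15-minute load averages, as also shown by top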

top - 00:49:59 up 1 day, 12:12,  3 users,  load average: 0,63, 0,66, 0,64
+

Tasks
+The second line is the Tasks output, and it’s broken down into five states. These five states display the status of processes on the system:

    +
  • total shows the sum of the processes from any state.
  • +
  • running shows how many processes are handling requests, executing normally, and have CPU access.
  • +
  • sleeping indicates processes awaiting resources, which is a normal state.
  • +
  • stopped reports processes exiting and releasing resources; these send a termination message to the parent process.
  • +
  • zombie refers to a process waiting for its parent process to release it; it may become orphaned if the parent exits first.
    +Zombie processes usually mean an application or service didn’t exit gracefully. A few zombie processes on a long-running system are not usually a problem.
  • +
Tasks: 386 total,   1 running, 384 sleeping,   1 stopped,   0 zombie
+

%Cpu(s)
+Values related to processor utilization are displayed on the third line. They provide insight into exactly what the CPUs are doing.

    +
  • us is the percent of time spent running user processes.
  • +
  • sy is the percent of time spent running the kernel.
  • +
  • ni is the percent of time spent running processes with manually configured nice values.
  • +
  • id is the percent of time idle (if low, CPU may be overworked).
  • +
  • wa is the percent of wait time (if high, CPU is waiting for I/O access).
  • +
  • hi is the percent of time managing hardware interrupts.
  • +
  • si is the percent of time managing software interrupts.
  • +
  • st is the percent of virtual CPU time waiting for access to physical CPU.
    +Values such as id, wa, and st help identify whether the system is overworked.
  • +
%Cpu(s):  4,1 us,  0,4 sy,  0,0 ni, 95,3 id,  0,0 wa,  0,0 hi,  0,1 si,  0,0 st
+

MiB Memory
+The final two lines of top’s output provide information on memory utilization. The first line—MiB Mem—displays physical memory utilization. This value is based on the total amount of physical RAM installed on the system.

MiB Mem :  15967,8 total,    260,9 free,   2749,7 used,  12957,2 buff/cache
+
+

Note: The term mebibyte (and similar units, such as kibibytes and gibibytes) differs slightly from measurements such as megabytes. Mebibytes are based on 1024 units, and megabytes are based on 1000 units (decimal). Most users are familiar with the decimal measurement, but it is not as accurate as the binary form. The top output shown above reports memory in mebibytes (MiB), i.e. in the binary form.

+
    +
  • total shows total installed memory.
  • +
  • free shows available memory.
  • +
  • used shows consumed memory.
  • +
  • buff/cache shows the amount of information buffered to be written.
  • +
+

MiB Swap
+Linux can take advantage of virtual memory when physical memory space is consumed by borrowing storage space from storage disks. The process of swapping data back and forth between physical RAM and storage drives is time-consuming and uses system resources, so it’s best to minimize the use of virtual memory.

MiB Swap:   2048,0 total,   2047,5 free,      0,5 used.  12739,8 avail Mem
+
    +
  • total shows total swap space.
  • +
  • free shows available swap space.
  • +
  • used shows consumed swap space.
  • +
  • buff/cache shows the amount of information cached for future reads.
  • +

In general, a high amount of swap utilization indicates the system does not have enough memory installed for its tasks. The solution is to either increase RAM or decrease the workload.

Task 4: free

free is a popular command used by system administrators on Unix/Linux platforms. It’s a powerful tool that gives insight into the memory usage in human-readable format.
+The man page for this command states that free displays the total amount of free and used memory on the system, including physical and swap space, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo.

    +
  • Run free with the -h option for human-readable output
    free -h
    +               total        used        free      shared  buff/cache   available
    +Mem:            15Gi       2,7Gi       265Mi       149Mi        12Gi        12Gi
    +Swap:          2,0Gi       0,0Ki       2,0Gi
    +
    +
    +
  • +
  • free provides options to display amount of memory in various units. free -b, -k, -m, -g display the amount of memory in bytes, kilobytes, megabytes, gigabytes respectively.
  • +
  • The various columns, displayed by the various releases above, seek to identify the Total, used, free, shared memory. It also seeks to display the memory held in cache and buffers as well.
  • +

Questions to answer

    +
  1. What are zombie processes? How can you find and kill them?
  2. +
  3. What are the differences between kill, killall, and pkill?
  4. +
  5. Run the top command on your system and annotate the data in the Tasks and %Cpu(s) lines of your output. Provide single sentence explanations for each of the data presented in these two lines.
  6. +
  7. Execute the following bash command:
    $ bash -c "exec -a fun${RANDOM}process sleep infinity" &
    +
    +
      +
    • Assume that there are multiple of such processes. To simulate this, you can run the command more than once.
    • +
    • Write a bash script that will locate and kill all the processes created by this command.
    • +
    • Display status messages when one of such processes is found, and when the process is killed. Additionally, display a message when the process is not found.
    • +
    • Your script should work on any machine it is executed on.
    • +
    • Be extremely careful and be as accurate as possible when finding this process. You don’t want to kill the wrong process.
    • +
    +
    +

    Show test results in the form of screenshots.

    +
    +
  8. +
  9. Write a bash script that loops infinitely and prints “Hello world!” every ten seconds. It should print “Interrupt received” when it receives SIGUSR1. +
      +
    • Show the script in your report, and show how you’re sending the signal to it.
    • +
    +
  10. +
  11. Write a bash script to monitor CPU usage, memory usage, and disk space usage. +
      +
    • For testing purposes, the check should execute every 15 seconds.
    • +
    • The usage statistics should be saved to a log file /var/log/system_utilization.log.
    • +
    • One line of log should contain the timestamp, the % of CPU in use, the % of memory in use, and the % of disk space used.
    • +
    • The log should contain descriptive information that will make it easy to understand.
    • +
    +
    +

    Show log samples created by this script in your report.

    +
    +
  12. +
+ + + + + + + + + diff --git a/week6/status.sh b/week6/status.sh new file mode 100644 index 0000000..725b15a --- /dev/null +++ b/week6/status.sh @@ -0,0 +1,14 @@ +#!/bin/bash + +while : +do + CPUUSAGE="$(top -b - n 1 | grep "Cpu" | awk '{print 100-$8}')%" + MEMUSAGE="$(free -h | grep "Mem" | awk '{print $3/$2*100}')%" + # disk usage for root directory + DISKUSAGE="$(df -h | awk '{ if ($6 == "/") + print $5; + }')" + + echo "$(date) CPU: $CPUUSAGE Mem: $MEMUSAGE Disk:$DISKUSAGE" >> /var/log/system_utilization.log + sleep 15 +done diff --git a/week6/zombie.c b/week6/zombie.c new file mode 100644 index 0000000..2454981 --- /dev/null +++ b/week6/zombie.c @@ -0,0 +1,12 @@ +#include +#include +#include + +int main() { + pid_t p = fork(); + if (p == 0) { + exit(0); + } else { + sleep(2); + } +} diff --git a/week7/backup-anacron.sh b/week7/backup-anacron.sh new file mode 100755 index 0000000..c24cdfa --- /dev/null +++ b/week7/backup-anacron.sh @@ -0,0 +1,4 @@ +#!/bin/bash +rm /home/rinri/edu/sna/funny-dir_* +FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')" +tar caf "$FNAME" /home/rinri/edu/sna/funny-dir diff --git a/week7/backup-nginx.sh b/week7/backup-nginx.sh new file mode 100755 index 0000000..f45da9b --- /dev/null +++ b/week7/backup-nginx.sh @@ -0,0 +1,4 @@ +#!/bin/bash +rm /home/rinri/edu/sna/nginx-www_* +FNAME="$(date '+/home/rinri/edu/sna/nginx-www_%b_%d_%Y_%H_%M_%S.tar.gz')" +tar caf "$FNAME" /var/www/ diff --git a/week7/backup.sh b/week7/backup.sh new file mode 100755 index 0000000..8fc98e3 --- /dev/null +++ b/week7/backup.sh @@ -0,0 +1,2 @@ +FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')" +tar caf "$FNAME" /home/rinri/edu/sna/funny-dir diff --git a/week7/funny-dir/hanyuu01.png b/week7/funny-dir/hanyuu01.png new file mode 100644 index 0000000..b1f9f8d Binary files /dev/null and b/week7/funny-dir/hanyuu01.png differ diff --git a/week7/lab7-solution.html b/week7/lab7-solution.html new file mode 100644 index 0000000..f4b9624 --- /dev/null +++ b/week7/lab7-solution.html @@ -0,0 +1,327 @@ + + + + + + + +Lab7 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab7 Solution Amirlan Sharipov (BS21-CS-01)

+
+

Table of Contents

+ +
+ +
+

1. Question 1

+
+
+
+

1.1. Cron job

+
+

+0 0 5 * * /home/rinri/edu/sna/backup.sh +

+ +
+
cat backup.sh
+
+
+ +
+FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')"
+tar caf "$FNAME" /home/rinri/edu/sna/funny-dir
+
+
+
+ +
+

1.2. Anacron

+
+

+1 10 backup-anacron /home/rinri/edu/sna/backup-anacron.sh +

+ +
+
cat backup-anacron.sh
+
+
+ +
+#!/bin/bash
+rm /home/rinri/edu/sna/funny-dir_*
+FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')"
+tar caf "$FNAME" /home/rinri/edu/sna/funny-dir
+
+
+
+
+ +
+

2. Question 2

+
+
+
+

2.1. Cron job

+
+

+0 0 * * 0 /home/rinri/edu/sna/backup-nginx.sh +

+ +
+
cat backup-nginx.sh
+
+
+ +
+#!/bin/bash
+rm /home/rinri/edu/sna/nginx-www_*
+FNAME="$(date '+/home/rinri/edu/sna/nginx-www_%b_%d_%Y_%H_%M_%S.tar.gz')"
+tar caf "$FNAME" /var/www/
+
+
+
+
+ +
+

3. Question 3

+
+

+5 0 * * * /home/rinri/edu/sna/log-info.sh "5 minutes after midnight everyday" +

+ +

+0 10 * * 1-5 /home/rinri/edu/sna/log-info.sh "10:00 on weekdays" +

+ +

+0 4 * * 1 /home/rinri/edu/sna/log-info.sh "4:00 on Monday" +

+ +

+0 0 8-14 * * [ "$(date +\%u)" -eq 6 ] && /home/rinri/edu/sna/log-info.sh "second saturday of the month" +

+ +
+
cat log-info.sh
+
+
+ +
+#!/bin/bash
+echo "$(date '+%d-%m-%y %H:%M:%S') $1" >> /var/log/sna_cron.log
+
+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-03-16 Thu 23:57

+
+ + \ No newline at end of file diff --git a/week7/lab7-solution.org b/week7/lab7-solution.org new file mode 100644 index 0000000..1fe7551 --- /dev/null +++ b/week7/lab7-solution.org @@ -0,0 +1,61 @@ +#+title: Lab7 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +** Cron job +0 0 */5 * * /home/rinri/edu/sna/backup.sh + +#+begin_src bash +cat backup.sh +#+end_src + +#+RESULTS: +: FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')" +: tar caf "$FNAME" /home/rinri/edu/sna/funny-dir + +** Anacron +1 10 backup-anacron /home/rinri/edu/sna/backup-anacron.sh + +#+begin_src bash +cat backup-anacron.sh +#+end_src + +#+RESULTS: +: #!/bin/bash +: rm /home/rinri/edu/sna/funny-dir_* +: FNAME="$(date '+/home/rinri/edu/sna/funny-dir_%b_%d_%Y_%H_%M_%S.tar.gz')" +: tar caf "$FNAME" /home/rinri/edu/sna/funny-dir + +* Question 2 +** Cron job +0 0 * * 0 /home/rinri/edu/sna/backup-nginx.sh + +#+begin_src bash +cat backup-nginx.sh +#+end_src + +#+RESULTS: +: #!/bin/bash +: rm /home/rinri/edu/sna/nginx-www_* +: FNAME="$(date '+/home/rinri/edu/sna/nginx-www_%b_%d_%Y_%H_%M_%S.tar.gz')" +: tar caf "$FNAME" /var/www/ + +* Question 3 +5 * * * * /home/rinri/edu/sna/log-info.sh "5 minutes after midnight everyday" + +0 10 * * 1-5 /home/rinri/edu/sna/log-info.sh "10:00 on weekdays" + +0 4 * * 1 /home/rinri/edu/sna/log-info.sh "4:00 on Monday" + +0 0 8-14 * 6 /home/rinri/edu/sna/log-info.sh "second saturday of the month" + +#+begin_src bash +cat log-info.sh +#+end_src + +#+RESULTS: +: #!/bin/bash +: echo "$(date '+%d-%m-%y %H:%M:%S') $1" >> /var/log/sna_cron.log diff --git a/week7/lab7.html b/week7/lab7.html new file mode 100644 index 0000000..f4104ec --- /dev/null +++ b/week7/lab7.html @@ -0,0 +1,402 @@ + + + + + + + + + + + + + Lab 7: Scheduling tasks - HackMD + + + + + + + + + + + + + + + + + +

Lab 7: Scheduling tasks

Task 1: Create cron jobs

    +
  • Let’s create an example script job.sh. We can check log.txt at any time to see whether our scheduled job has run. Create this script in your home directory
    +
    #!/bin/bash +echo `date +"%Y-%M-%d %T"`" - Hello world" >> /var/log/log.txt +
    +
  • +

Adding the job to the user crontab

    +
  • To understand the user crontab, let’s add the script to it manually
    $ crontab -e
    +
    +
  • +
  • This command will open an editor to edit the existing user crontab. Let’s append our cron expression: +
    +

    Replace the <username> with your username

    +
    +
    30 0 * * * /home/<username>/job.sh
    +
    +This schedules the script to run every day, 30 minutes after midnight.
  • +
  • We also need to be sure that the current user has execute permissions for this script. So, let’s use the chmod command to add them:
    $ chmod u+x /home/<username>/job.sh
    +
    +Now, job.sh is scheduled and will run every day. We can test this by inspecting the log.txt file
  • +

Adding the job to the system crontab
+To understand the system crontab, let’s also add this script to it manually.

    +
  • The system crontab file is kept in /etc/crontab. Let’s append the following line:
    30 0 * * * root /home/<username>/job.sh
    +
    +We should note that the system crontab has an additional user field; here we specify root, so the job will be run by the root user.
  • +

Script for adding the job to the user crontab
+Now let’s try automating the process to add to the user crontab. Install a new file to crontab.

    +
  • Let’s first create a new script file:
    $ touch /home/<username>/myScript.sh
    +
    +
  • +
  • The first thing our script will do is take a copy of all the current jobs. Do not forget to add executable rights for the script to work
    +
    #!/bin/bash +crontab -l > crontab_new +
    +We now have all the previous jobs in the crontab_new file. This means we can append our new job to it and then rewrite the crontab by using the edited file as an input argument to the crontab command:
    $ echo "30 0 * * * /home/<username>/job.sh" >> crontab_new
    +$ crontab crontab_new
    +
    +
  • +
  • Since the crontab_new file is temporary, we can remove it:
    $ rm crontab_new
    +
    +This method works well, though it does require the use of a temporary file. The main idea here is to add multiple tasks to the existing user jobs. Let’s see if we can optimize it further.
  • +

Optimize the previous script by using a pipe
+Our previous script relied on a temporary file and had to tidy it up. It also didn’t check whether the cron entry was already installed, and thus, it could install a duplicate entry if executed multiple times.

    +
  • We can address both of these by using a pipe-based script. If crontab command has a dash -, the crontab data is read from standard the input
    +
    #!/bin/bash +(crontab -l; echo "30 0 * * * /home/<username>/job.sh") | sort -u | crontab - +
    +As before, the crontab -l and echo commands write out the previous lines of the crontab as well as the new entry. These are piped through the sort command to remove duplicate lines. The -u option in sort is for keeping only unique lines.
    +The result of this is piped into the crontab command, which rewrites the crontab file with the new entries.
    +We should be aware, though, that using sort will completely reorder the file, including any comments. sort -u is pretty easy to understand in a script, but we can achieve a less destructive de-duplication with awk:
  • +
+
#!/bin/bash +(crontab -l; echo "30 0 * * * /home/<username>/job.sh")|awk '!x[$0]++'|crontab - +

This will remove all duplicates from the crontab without sorting it.

    +
  • The syntax of the awk command used is explained below: +
      +
    • x[$0] - uses the current line $0 as the key into the array x, taking the value stored there. If this particular key was never referenced before, x[$0] evaluates to the empty string.
    • +
    • The ! negates the value from before. If it was empty or zero (false), we now have a true result. If it was non-zero (true), we have a false result. If the whole expression evaluated to true, the whole line is printed as the default action print $0
    • +
    • ++ increments the value of x[$0]
    • +
    +
  • +

Using system crontab

    +
  • First, let’s create a new script:
    $ touch /home/$USER/myScript2.sh
    +
    +The syntax of the system schedule line is similar to the user schedule. We just need to specify the root username in the schedule line
    +
    #!/bin/bash +sudo /bin/bash -c 'echo "30 0 * * * root /home/<username>/job.sh" >> /etc/crontab' +
    +We’re using sudo /bin/bash -c so that both the echo and the redirection run as root. Otherwise (e.g. with a plain sudo echo ... >> /etc/crontab), only echo would run as root while the redirection would be performed by the current user’s shell, giving a permission denied error. The -c option tells bash to take the command in single quotes as a string and run it in a shell.
    +Note that this is plain file manipulation, compared with the crontab command used earlier. We can add similar filters like sort or awk if we want to avoid duplicate entries.
    +Note that this is plain file manipulation, compared with the crontab command used earlier. We can add similar filters like sort or awk if we want to avoid duplicate entries.
  • +

Using the /etc/cron.d directory
+Besides the /etc/crontab path, cron considers all the files in the /etc/cron.d directory as system jobs too. So, we can also put the schedule line in a new file in the /etc/cron.d directory.

    +
  • +

    Let’s now make another script for adding a job to the cron.d directory, as an alternative to the /etc/crontab file:

    +
    $ touch /home/<username>/myScript3.sh
    +
    +

    We need to put the schedule line in a new file in the cron.d directory — we’ll call our file schedule. Note that in /etc/cron.d, some filenames are considered invalid. For example, if we choose schedule.sh for the filename, it will be skipped because the filename should not have any extension:

    +
    +
    #!/bin/bash +sudo touch /etc/cron.d/schedule +
    +

    The cron.d directory and its sub-directories are usually used by system services, and only the root user can have access to these directories. Also, the files in /etc/cron.d must be owned by root. So, we need to use sudo.

    +
  • +
  • +

    Let’s now add our schedule line to the schedule file and change the permissions.

    +
    $ sudo /bin/bash -c 'echo "30 0 * * * root /home/<username>/job.sh" > /etc/cron.d/schedule'
    +$ sudo chmod 600 /etc/cron.d/schedule
    +
    +

    Note that we change the file’s permissions to a minimum 600. This is because files in /etc/cron.d must not be writable by group or other. Otherwise, they will be ignored. Also, the schedule files under /etc/cron.d do not need to be executable. So, we don’t need permission 700.

    +
  • +

Task 2: at command

at is a command-line utility that allows you to schedule commands to be executed at a particular time. Jobs created with at are executed only once.

    +
  • Install at from the repository
    $ sudo apt install at
    +
    +
  • +
  • Once the program is installed make sure atd, the scheduling daemon is running and set to start on boot:
    $ sudo systemctl enable --now atd
    +
    +
  • +
  • The simplified syntax for the at command is as follows:
    $ at [OPTION...] runtime
    +
    +
  • +
  • Let’s create a job that will be executed at 9:00 am:
    $ at 09:00
    +
    +Once you hit Enter, you’ll be presented with the at command prompt that most often starts with at>. You also see a warning that tells you the shell in which the command will run:
    warning: commands will be executed using /bin/sh
    +at>
    +
    +Enter one or more command you want to execute:
    at> tar -xf $HOME/file.tar.gz
    +
    +When you’re done entering the commands, press CTRL+D to exit the prompt and save the job:
    at> <EOT>
    +job 1 at Mon Oct 17 09:00:00 2022
    +
    +
  • +
  • There are also other ways to pass the command you want to run, besides entering the command in the at prompt. One way is to use echo and pipe the command to at:
    $ echo "command_to_be_run" | at 09:00
    +
    +
  • +
  • Another option is to use the redirect
    $ at 09:00 <<END
    +command_to_be_run
    +END
    +
    +
  • +
  • To read the commands from a file instead of the standard input, invoke the command with -f option following by the path to the file. For example, to create a job that will run the script $HOME/script.sh:
    $ at 09:00 -f $HOME/script.sh
    +
    +
  • +

Questions to answer

+

Upload the scripts you create to moodle.
+Show test results of all implementation in your report.

+
    +
  1. +

    Create backup for any directory with files inside.

    +
      +
    • Create a cron job which backs up the directories at the 5th day of every month.
    • +
    • Create an anacron job that backs up the directories daily. The anacron job should delete old backups.
    • +
    +
  2. +
  3. +

    Install nginx and create a cron job that backs up the directory that contains index.html.

    +
      +
    • The backup should occur at midnight every Sunday.
    • +
    • The job should delete old or previous backups.
    • +
    +
  4. +
  5. +

    Create cron jobs that appends the current time and a descriptive information about the job to the log file at /var/log/sna_cron.log. You should meet the following requirements:

    +
      +
    • Use /bin/bash to run commands instead of the default /bin/sh
    • +
    • Schedule the following jobs: +
        +
      • Run five minutes after midnight, everyday
      • +
      • Run at 10:00 on weekdays
      • +
      • Run at 04:00 every Monday
      • +
      • Run on the second saturday of every month.
      • +
      +
    • +
    +
    +

    Example output written to the log file when the first job executes is shown below

    +
    14-10-22 00:05:00 Run five minutes after midnight
    +
    +
    +
  6. +

Bonus

    +
  1. How can cron jobs be abused? +
      +
    • Give one specific real life example where cron job was abused.
    • +
    • Provide details about the job that was scheduled which led to the abuse. Details should include job execution frequency, command/script scheduled, and the objective(s) of the job.
    • +
    • Show the job you are describing. For example * * * * * /var/tmp/.ICE-unix/-l/sh >/dev/null 2>&1 +
      +

      Be brief in your explanations. Answer this question with a maximum of six sentences.

      +
      +
    • +
    +
  2. +
+ + + + + + + + + diff --git a/week7/log-info.sh b/week7/log-info.sh new file mode 100755 index 0000000..2797f84 --- /dev/null +++ b/week7/log-info.sh @@ -0,0 +1,2 @@ +#!/bin/bash +echo "$(date '+%d-%m-%y %H:%M:%S') $1" >> /var/log/sna_cron.log diff --git a/week8/lab8-solution.html b/week8/lab8-solution.html new file mode 100644 index 0000000..c9f8221 --- /dev/null +++ b/week8/lab8-solution.html @@ -0,0 +1,466 @@ + + + + + + + +Lab8 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab8 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+
+
systemd-analyze
+systemd-analyze plot > systemd.svg
+
+
+ +
+Startup finished in 5.093s (firmware) + 122ms (loader) + 7.459s (kernel) + 4.545s (userspace) = 17.221s 
+graphical.target reached after 4.544s in userspace.
+
+ + + +
+

systemd.svg +

+
+
+
+ +
+

2. Question 2

+
+

+graphical.target -> multi-user.target -> basic.target -> sysinit.target +

+ +
+
cat /usr/lib/systemd/system/graphical.target
+cat /usr/lib/systemd/system/multi-user.target
+cat /usr/lib/systemd/system/basic.target
+cat /usr/lib/systemd/system/sysinit.target
+
+
+ +
+#  SPDX-License-Identifier: LGPL-2.1-or-later
+#
+#  This file is part of systemd.
+#
+#  systemd is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU Lesser General Public License as published by
+#  the Free Software Foundation; either version 2.1 of the License, or
+#  (at your option) any later version.
+
+[Unit]
+Description=Graphical Interface
+Documentation=man:systemd.special(7)
+Requires=multi-user.target
+Wants=display-manager.service
+Conflicts=rescue.service rescue.target
+After=multi-user.target rescue.service rescue.target display-manager.service
+AllowIsolate=yes
+#  SPDX-License-Identifier: LGPL-2.1-or-later
+#
+#  This file is part of systemd.
+#
+#  systemd is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU Lesser General Public License as published by
+#  the Free Software Foundation; either version 2.1 of the License, or
+#  (at your option) any later version.
+
+[Unit]
+Description=Multi-User System
+Documentation=man:systemd.special(7)
+Requires=basic.target
+Conflicts=rescue.service rescue.target
+After=basic.target rescue.service rescue.target
+AllowIsolate=yes
+#  SPDX-License-Identifier: LGPL-2.1-or-later
+#
+#  This file is part of systemd.
+#
+#  systemd is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU Lesser General Public License as published by
+#  the Free Software Foundation; either version 2.1 of the License, or
+#  (at your option) any later version.
+
+[Unit]
+Description=Basic System
+Documentation=man:systemd.special(7)
+Requires=sysinit.target
+Wants=sockets.target timers.target paths.target slices.target
+After=sysinit.target sockets.target paths.target slices.target tmp.mount
+
+# We support /var, /tmp, /var/tmp, being on NFS, but we don't pull in
+# remote-fs.target by default, hence pull them in explicitly here. Note that we
+# require /var and /var/tmp, but only add a Wants= type dependency on /tmp, as
+# we support that unit being masked, and this should not be considered an error.
+RequiresMountsFor=/var /var/tmp
+Wants=tmp.mount
+#  SPDX-License-Identifier: LGPL-2.1-or-later
+#
+#  This file is part of systemd.
+#
+#  systemd is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU Lesser General Public License as published by
+#  the Free Software Foundation; either version 2.1 of the License, or
+#  (at your option) any later version.
+
+[Unit]
+Description=System Initialization
+Documentation=man:systemd.special(7)
+
+Wants=local-fs.target swap.target
+After=local-fs.target swap.target
+Conflicts=emergency.service emergency.target
+Before=emergency.service emergency.target
+
+
+ +
+

2.1. Information taken from man pages:

+
+
+
+

2.1.1. basic.target

+
+

+A special target unit covering basic boot-up. +

+
+
+
+

2.1.2. graphical.target

+
+

+A special target unit for setting up a graphical login screen. This pulls in multi-user.target. +

+
+
+
+

2.1.3. multi-user.target

+
+

+A special target unit for setting up a multi-user system (non-graphical). This is pulled in by graphical.target. +

+
+
+
+

2.1.4. sysinit.target

+
+

+This target pulls in the services required for system initialization. +

+
+
+
+
+

2.2. Wants for sysinit.target

+
+

+Wants=local-fs.target swap.target
+Wants= is used like Requires=, but it is less strict: the unit can still start even if a wanted unit does not exist or fails to start.
+In this case, sysinit.target wants local filesystems and swap to be set up.
+
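One way to double-check the trace without opening the unit files is to query systemd directly (standard systemctl options):

systemctl show -p Requires,Wants graphical.target
systemctl show -p Requires,Wants multi-user.target
systemctl show -p Requires,Wants basic.target
systemctl show -p Requires,Wants sysinit.target
systemctl list-dependencies graphical.target

For sysinit.target, the Requires= output should come back empty, which is the dead end described above.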

+
+
+
+ +
+

3. Question 3

+
+
+
cat webserver.sh
+
+
+ +
+#!/bin/bash
+
+while true ; do
+    STATS="<h1>Uptime</h1>$(uptime)\n<h1>Inode and disk usage</h1>$(df -ih)\n<h1>Mem usage</h1>$(free -h)\n<h1>Syslog</h1>$(tail -n 15 /var/log/syslog)\r\n\r\n"
+    LEN=$(printf "%s" "$STATS" | wc -c)
+    RES="HTTP/1.1 200OK\r\nContent-Length: $LEN\r\n\r\n"
+    echo -e "$RES$STATS"| nc -l -p 1500;
+done
+
+ + +

+[Unit]
+Description=stats web server
+

+ +

+[Service]
+User=root
+ExecStart=/home/rinri/edu/sna/webserver.sh
+Restart=always
+CPUQuota=15%
+MemoryMax=256000000
+

+ +

+[Install]
+WantedBy=multi-user.target
+

+
+
+ +
+

4. Question 4

+
+
+
cat update.sh
+
+
+ +
+#!/bin/bash
+
+pacman -Sy
+
+ + +

+[Unit]
+Description=update package sources list
+

+ +

+[Service]
+User=root
+ExecStart=/home/rinri/edu/sna/update.sh
+

+ +

+[Install]
+WantedBy=multi-user.target
+

+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-03-23 Thu 23:57

+
+ + \ No newline at end of file diff --git a/week8/lab8-solution.org b/week8/lab8-solution.org new file mode 100644 index 0000000..9ec8a5b --- /dev/null +++ b/week8/lab8-solution.org @@ -0,0 +1,161 @@ +#+title: Lab8 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +#+begin_src bash +systemd-analyze +systemd-analyze plot > systemd.svg +#+end_src + +#+RESULTS: +: Startup finished in 5.093s (firmware) + 122ms (loader) + 7.459s (kernel) + 4.545s (userspace) = 17.221s +: graphical.target reached after 4.544s in userspace. + +#+ATTR_HTML: :width 1000px +[[./systemd.svg]] + +* Question 2 +graphical.target -> multi-user.target -> basic.target -> sysinit.target + +#+begin_src bash +cat /usr/lib/systemd/system/graphical.target +cat /usr/lib/systemd/system/multi-user.target +cat /usr/lib/systemd/system/basic.target +cat /usr/lib/systemd/system/sysinit.target +#+end_src + +#+RESULTS: +#+begin_example +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# This file is part of systemd. +# +# systemd is free software; you can redistribute it and/or modify it +# under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. + +[Unit] +Description=Graphical Interface +Documentation=man:systemd.special(7) +Requires=multi-user.target +Wants=display-manager.service +Conflicts=rescue.service rescue.target +After=multi-user.target rescue.service rescue.target display-manager.service +AllowIsolate=yes +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# This file is part of systemd. +# +# systemd is free software; you can redistribute it and/or modify it +# under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. + +[Unit] +Description=Multi-User System +Documentation=man:systemd.special(7) +Requires=basic.target +Conflicts=rescue.service rescue.target +After=basic.target rescue.service rescue.target +AllowIsolate=yes +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# This file is part of systemd. +# +# systemd is free software; you can redistribute it and/or modify it +# under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. + +[Unit] +Description=Basic System +Documentation=man:systemd.special(7) +Requires=sysinit.target +Wants=sockets.target timers.target paths.target slices.target +After=sysinit.target sockets.target paths.target slices.target tmp.mount + +# We support /var, /tmp, /var/tmp, being on NFS, but we don't pull in +# remote-fs.target by default, hence pull them in explicitly here. Note that we +# require /var and /var/tmp, but only add a Wants= type dependency on /tmp, as +# we support that unit being masked, and this should not be considered an error. +RequiresMountsFor=/var /var/tmp +Wants=tmp.mount +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# This file is part of systemd. +# +# systemd is free software; you can redistribute it and/or modify it +# under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. 
+ +[Unit] +Description=System Initialization +Documentation=man:systemd.special(7) + +Wants=local-fs.target swap.target +After=local-fs.target swap.target +Conflicts=emergency.service emergency.target +Before=emergency.service emergency.target +#+end_example + +** Information taken from man pages: +*** basic.target +A special target unit covering basic boot-up. +*** graphical.target +A special target unit for setting up a graphical login screen. This pulls in multi-user.target. +*** multi-user.target +A special target unit for setting up a multi-user system (non-graphical). This is pulled in by graphical.target. +*** sysinit.target +This target pulls in the services required for system initialization. +** Wants for sysinit.target +Wants=local-fs.target swap.target +Wants is used like Requires, but it's less strict, meaning that it can start if the "wanted" service doesn't exist or failed to start. +In this case, it wants local filesystems and swap to start. + +* Question 3 +#+begin_src bash +cat webserver.sh +#+end_src + +#+RESULTS: +: #!/bin/bash +: +: while true ; do +: STATS="

<h1>Uptime</h1>$(uptime)\n<h1>Inode and disk usage</h1>$(df -ih)\n<h1>Mem usage</h1>$(free -h)\n<h1>Syslog</h1>
$(tail -n 15 /var/log/syslog)\r\n\r\n" +: LEN=$(printf "%s" "$STATS" | wc -c) +: RES="HTTP/1.1 200OK\r\nContent-Length: $LEN\r\n\r\n" +: echo -e "$RES$STATS"| nc -l -p 1500; +: done + +[Unit] +Description=stats web server + +[Service] +User=root +ExecStart=/home/rinri/edu/sna/webserver.sh +Restart=always +CPUQuota=15% +MemoryMax=256000000 + +[Install] +WantedBy=multi-user.target + +* Question 4 +#+begin_src bash +cat update.sh +#+end_src + +[Unit] +Description=update package sources list + +[Service] +User=root +ExecStart=/home/rinri/edu/sna/update.sh + +[Install] +WantedBy=multi-user.target diff --git a/week8/lab8.html b/week8/lab8.html new file mode 100644 index 0000000..16992d5 --- /dev/null +++ b/week8/lab8.html @@ -0,0 +1,458 @@ + + + + + + + + + + + + + Lab 8: Systemd - HackMD + + + + + + + + + + + + + + + + + +

Lab 8: Systemd

Task 1: Create a shell script

    +
  • Create a custom web server with bash. The web server displays the processes running on the server by executing the top command.
    $ sudo vim /usr/bin/script.sh 
    +
    +
  • +
  • Add the following to the file.
    + + + + + + + + +
    #!/bin/bash
    +
    +while true;
    + do dd if=/dev/zero of=/dev/null
    +done &
    +
    +while true;
    + do echo -e "HTTP/1.1 200 OK\n\n$(top -bn1)" \
    + | nc -l -k -p 8080 -q 1;
    +done
    +
    +
    +

    dd has been added for a purpose which you will see in further sections.
    +Consider the web server as a means of remotely viewing the resource usage of dd.

    +
    +
  • +
  • Save the script and set execute permission.
    $ sudo chmod +x /usr/bin/script.sh 
    +
    +
  • +

You have to manually run this script whenever the system is restarted. We can solve this by creating a systemd service for it. This will give us more options for managing the script’s execution.

Task 2: Create a systemd file

Next, create a systemd service file for the script on your system. This file must have a .service extension and be saved under the /lib/systemd/system/ directory

$ sudo vim /lib/systemd/system/shellscript.service 
+

Now, add the following content and update the script filename and location. You can also change the description of the service. After that, save the file and close it.

+ + + + + + +
[Unit]
+Description=My custom web service to show system processes
+
+[Service]
+ExecStart=/usr/bin/script.sh
+
+[Install]
+WantedBy=multi-user.target
+
+

The [Unit] section describes the service and specifies its ordering dependencies as well as conflicting units. In [Service], the commands or scripts to run on unit activation, stop, and reload are specified. Finally, the [Install] section lists the units that should depend on this service once it is enabled (here, multi-user.target).

+

Task 3: Enable the service

    +
  • Your systemd service has been added to your system. Let’s reload the systemctl daemon to read new files. You need to reload the configuration file of a unit each time after making changes in any *.service file.
    $ sudo systemctl daemon-reload 
    +
    +
  • +
  • Enable the service to start on system boot and also start the service using the following commands.
    $ sudo systemctl enable shellscript.service 
    +$ sudo systemctl start shellscript.service 
    +
    +
    +

    When you enable a service, a symbolic link of that service is created in the /etc/systemd/system/multi-user.target.wants directory.

    +
    +
  • +
  • Finally, verify that the script is up and running as a systemd service.
    $ sudo systemctl status shellscript.service 
    +
    +
  • +
  • Example output when we access the web service via port 8080 is shown below.
    +
  • +
  • The dd process shown in the output was executed by the service we just created. This process is using 100% of the CPU. We can fix this problem by making use of systemd control groups.
  • +

Task 4: Systemd control groups (cgroups)

A systemd control group (cgroup) is a mechanism that allows you to control the use of system resources by a group of processes.

    +
  • +

    You can view the cgroup of a service with the systemctl status <service_name> command.
    +Let’s view the status of our custom system service shellscript.service again.

    +
    $ systemctl status shellscript.service
    +
    +

    You get an output similar to the following

    +
    ● shellscript.service - My Shell Script
    +   Loaded: loaded (/lib/systemd/system/shellscript.service; enabled; vendor preset: enabled)
    +   Active: active (running) since Sun 2022-10-23 19:39:02 +04; 1s ago
    + Main PID: 6821 (script.sh)
    +    Tasks: 4 (limit: 9415)
    +   Memory: 1.0M
    +      CPU: 1.945s
    +   CGroup: /system.slice/shellscript.service
    +           ├─6821 /bin/bash /usr/bin/script.sh
    +           ├─6822 /bin/bash /usr/bin/script.sh
    +           ├─6824 nc -l -k -p 8080 -q 1
    +           └─6825 dd if=/dev/zero of=/dev/null
    +
    +

    The output shows that shellscript.service is under the system.slice control group.
    +The second line 6821 /bin/bash /usr/bin/script.sh shows the process ID and the command used to start shellscript.service.
    +Subsequent lines under the CGroup are the other commands executed in the service.

    +
  • +
  • +

    View your system’s cgroup hierarchy

    +
    $ systemctl status
    +
    +
  • +
  • +

    You can view the system resource usage by each cgroup

    +
    $ systemd-cgtop
    +
    +
  • +
  • +

    Slices allow one to create a hierarchical structure in which relative shares of resources are defined for the entities that belong to those slices.
    +View a list of all systemd slices

    +
    $ systemctl -t slice --all
    +
    +
  • +
  • +

    Create a systemd slice at /etc/systemd/system/testslice.slice. Add the following to the file.

    +
    + + + + + + + +
    [Unit]
    +Description=Custom systemd slice for SNA lab on systemd.
    +Before=slices.target
    +
    +[Slice]
    +MemoryAccounting=true
    +CPUAccounting=true
    +MemoryMax=10%
    +CPUQuota=10%
    +
    +

    This slice sets CPU and memory usage limits for all processes running under it. As seen in the configuration, a maximum of 10% of memory and CPU resources can be used by the processes running under the control group testslice.slice.

    +
  • +
  • +

    Let’s add our custom service shellscript.service to this new control group.
    +Modify the service file /lib/systemd/system/shellscript.service to use this systemd slice by adding the line Slice=testslice.slice to the [Service] section as shown below:

    +
    + + + + + + + +
    [Unit]
    +Description=My custom web service to show system processes
    +
    +[Service]
    +ExecStart=/usr/bin/script.sh
    +Slice=testslice.slice
    +
    +[Install]
    +WantedBy=multi-user.target
    +
    +
  • +
  • +

    Reload the daemon and restart the service to apply the changes

    +
    $ sudo systemctl daemon-reload
    +$ sudo systemctl restart shellscript.service
    +
    +
  • +
  • +

    Refresh the web page. The output shows that dd now uses less than 10% of the CPU.
    +

    +
  • +
  • +

    Run $ systemd-cgtop to view resource usage by cgroups.

    +
    Control Group                            Tasks   %CPU   Memory  Input/s Output/s
    +/                                          774   13,9     1.7G        -        -
    +testslice.slice                              2    9,9   956.0K        -        -
    +testslice.slice/shellscript.service          2    9,9   576.0K        -        -
    +user.slice                                 489    3,1     1.5G        -        -
    +user.slice/user-1000.slice                 489    3,4     1.3G        -        -
    +
    +
  • +
  • +

    View the hierarchy and other artifacts about the cgroup.

    +
    $ systemctl -t slice --all
    +$ systemctl status
    +
    +
  • +
  • +

    View systemd log for your service

    +
    $ journalctl -u shellscript.service
    +
    +
  • +

Questions to answer

+

Upload the scripts you create to moodle.
+Show test results of your implementation in your report.

+
    +
  1. +

    Show the following boot-up performance statistics on your system:

    +
      +
    • Time spent in the kernel space before the user space was reached.
    • +
    • Show an SVG image that contains services that have been started, and how long it took for them to initialize.
    • +
    +
  2. +
  3. +

    Take the systemd unit graphical.target as your starting point and start tracing backwards using only the Requires variable. At what systemd unit do you reach a dead end where there is no Requires variable left?

    +
      +
    • Provide brief explanation for each of the systemd units you encounter while performing this trace.
    • +
    • The unit at this dead end Wants some systemd units. Why does it want these units?
    • +
    +
    +

    Show screenshots of every step as you trace.

    +
    +
  4. +
  5. +

    Create a simple web server in bash that shows the following: system uptime, inode usage, current memory, disk space usage statistics, and the last 15 lines of /var/log/syslog.

    +
      +
    • The required information should be queried from the server every time a user opens or refreshes the page.
    • +
    • You do not need to save the results anywhere. Users only need live updates when the server is visited.
    • +
    • The results should be displayed on a single page in an orderly manner that is easy to read.
    • +
    • Create a systemd service on your system to run this script (web server). Show how you can start your new service, and configure it to run after system reboot.
    • +
    • Your systemd service should restart the web server if the web server crashes or is killed.
    • +
    • This service is allowed to use a maximum of 15% of the CPU and 256MB memory.
    • +
    +
    +

    Show all steps taken, and all unit files created in your report.
    +At the end of this task, you must have at least one bash script, one service file, and one slice file all working together to achieve the objectives.

    +
    +
  6. +
  7. +

    Create a systemd service that will update your package sources list from the repository.

    +
      +
    • The service should update the package source list five minutes after booting, and then every day after that.
    • +
    • The schedule of the execution should be done with only systemd.
    • +
    +
  8. +

Bonus

    +
  1. Create a custom target in /etc/systemd/system/<your_target>.target. +
      +
    • Add a description of the target file.
    • +
    • Create a directory /etc/systemd/system/<your_target>.wants/
    • +
    • Create symlinks in this new directory to the additional services you wish to enable. Each should be a symlink to a service unit from /usr/lib/systemd/system/ (a sketch follows this list).
    • +
    +
  2. +
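    A minimal sketch for this bonus task, assuming the target is named custom.target and sshd.service is one of the services to pull in (both names are just examples):

    $ sudo tee /etc/systemd/system/custom.target <<'EOF'
    [Unit]
    Description=My custom target
    Requires=multi-user.target
    After=multi-user.target
    AllowIsolate=yes
    EOF
    $ sudo mkdir -p /etc/systemd/system/custom.target.wants
    $ sudo ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/custom.target.wants/
    $ sudo systemctl daemon-reload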
+ + + + + + + + + diff --git a/week8/systemd.svg b/week8/systemd.svg new file mode 100644 index 0000000..ad44cef --- /dev/null +++ b/week8/systemd.svg @@ -0,0 +1,1784 @@ + + + + + + + + + + + + + + + + + +Startup finished in 5.093s (firmware) + 122ms (loader) + 7.459s (kernel) + 4.545s (userspace) = 17.221s +graphical.target reached after 4.544s in userspace.Arch Linux akemi (Linux 6.2.2-arch2-1 #1 SMP PREEMPT_DYNAMIC Wed, 08 Mar 2023 04:07:29 +0000) x86-64 + + + + + -5.0s + + + + + + + + + + + -4.0s + + + + + + + + + + + -3.0s + + + + + + + + + + + -2.0s + + + + + + + + + + + -1.0s + + + + + + + + + + + 0.0s + + + + + + + + + + + 1.0s + + + + + + + + + + + 2.0s + + + + + + + + + + + 3.0s + + + + + + + + + + + 4.0s + + + + + + + + + + + 5.0s + + + + + + + + + + + 6.0s + + + + + + + + + + + 7.0s + + + + + + + + + + + 8.0s + + + + + + + + + + + 9.0s + + + + + + + + + + + 10.0s + + + + + + + + + + + 11.0s + + + + + + + + + + + 12.0s + + firmware + + loader + + kernel + + + + + systemd + + + + system.slice + + + + -.slice + + + + init.scope + + + + -.mount + + + + dev-sdb3.device (950ms) + + + + machine.slice + + + + system-getty.slice + + + + system-modprobe.slice + + + + system-systemd\x2dfsck.slice + + + + user.slice + + + + systemd-ask-password-console.path + + + + systemd-ask-password-wall.path + + + + proc-sys-fs-binfmt_misc.automount + + + + cryptsetup.target + + + + integritysetup.target + + + + slices.target + + + + veritysetup.target + + + + dm-event.socket + + + + lvm2-lvmpolld.socket + + + + systemd-coredump.socket + + + + systemd-journald-dev-log.socket + + + + systemd-journald.socket + + + + systemd-udevd-control.socket + + + + systemd-udevd-kernel.socket + + + + dev-hugepages.mount (25ms) + + + + dev-mqueue.mount (24ms) + + + + sys-kernel-debug.mount (23ms) + + + + sys-kernel-tracing.mount (22ms) + + + + kmod-static-nodes.service (53ms) + + + + lvm2-monitor.service (629ms) + + + + modprobe@configfs.service (54ms) + + + + modprobe@drm.service (155ms) + + + + modprobe@fuse.service (175ms) + + + + systemd-journald.service (116ms) + + + + systemd-modules-load.service (232ms) + + + + systemd-remount-fs.service (280ms) + + + + systemd-udev-trigger.service (346ms) + + + + sys-fs-fuse-connections.mount (22ms) + + + + sys-kernel-config.mount (20ms) + + + + systemd-journal-flush.service (165ms) + + + + systemd-random-seed.service (63ms) + + + + systemd-sysctl.service (27ms) + + + + systemd-tmpfiles-setup-dev.service (100ms) + + + + run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount + + + + run-credentials-systemd\x2dsysctl.service.mount + + + + systemd-udevd.service (126ms) + + + + local-fs-pre.target + + + + mnt-data.automount + + + + mnt-rec.automount + + + + mnt-win.automount + + + + sys-module-fuse.device + + + + sys-module-configfs.device + + + + dev-ttyS13.device + + + + sys-devices-platform-serial8250-tty-ttyS13.device + + + + sys-devices-platform-serial8250-tty-ttyS15.device + + + + dev-ttyS15.device + + + + sys-devices-platform-serial8250-tty-ttyS14.device + + + + dev-ttyS14.device + + + + dev-ttyS10.device + + + + sys-devices-platform-serial8250-tty-ttyS10.device + + + + sys-devices-platform-serial8250-tty-ttyS12.device + + + + dev-ttyS12.device + + + + sys-devices-platform-serial8250-tty-ttyS1.device + + + + dev-ttyS1.device + + + + sys-devices-platform-serial8250-tty-ttyS0.device + + + + dev-ttyS0.device + + + + dev-ttyS11.device + + + + sys-devices-platform-serial8250-tty-ttyS11.device + + + + dev-ttyS16.device + + + + 
sys-subsystem-net-devices-wlp3s0.device + + + + cpupower-gui-helper.service (222ms) + + + + getty@tty1.service + + + + getty.target + + + + colord.service (59ms) + + + + docker.service (1.411s) + + + + polkit.service (35ms) + + + + sys-subsystem-net-devices-virbr0.device + + + + sys-devices-virtual-net-virbr0.device + + + + sys-devices-virtual-net-docker0.device + + + + sys-subsystem-net-devices-docker0.device + + + + multi-user.target + + + + graphical.target + + + + Activating + + Active + + Deactivating + + Setting up security module + + Generators + + Loading unit files + + + diff --git a/week8/update.sh b/week8/update.sh new file mode 100644 index 0000000..a4ec5e9 --- /dev/null +++ b/week8/update.sh @@ -0,0 +1,3 @@ +#!/bin/bash + +pacman -Sy diff --git a/week8/webserver.sh b/week8/webserver.sh new file mode 100755 index 0000000..aef3e62 --- /dev/null +++ b/week8/webserver.sh @@ -0,0 +1,8 @@ +#!/bin/bash + +while true ; do + STATS="

<h1>Uptime</h1>$(uptime)\n<h1>Inode and disk usage</h1>$(df -ih)\n<h1>Mem usage</h1>$(free -h)\n<h1>Syslog</h1>
$(tail -n 15 /var/log/syslog)\r\n\r\n" + LEN=$(printf "%s" "$STATS" | wc -c) + RES="HTTP/1.1 200OK\r\nContent-Length: $LEN\r\n\r\n" + echo -e "$RES$STATS"| nc -l -p 1500; +done diff --git a/week9/control b/week9/control new file mode 100644 index 0000000..f855ce8 --- /dev/null +++ b/week9/control @@ -0,0 +1,6 @@ +Package: hellopackage +Version: 1.0 +Architecture: all +Maintainer: RinRi +Depends: python3 +Description: Hello world diff --git a/week9/create-package.sh b/week9/create-package.sh new file mode 100755 index 0000000..4f61edd --- /dev/null +++ b/week9/create-package.sh @@ -0,0 +1,11 @@ +#!/bin/bash + +rm -rf hellopackage +mkdir -p hellopackage hellopackage/usr/local/bin hellopackage/var/helloworld +printf "%s" '#!/usr/bin/env python3\nprint("Hello, World!")' > hellopackage/var/helloworld/helloworld.py +printf "%s" '#!/bin/bash\n/var/helloworld/helloworld.py' > hellopackage/usr/local/bin/helloworld +chmod -R 0755 hellopackage/var/helloworld hellopackage/usr/local/bin/helloworld + +mkdir -p hellopackage/DEBIAN +cp control hellopackage/DEBIAN/ +dpkg-deb --build --root-owner-group hellopackage diff --git a/week9/hellopackage/DEBIAN/control b/week9/hellopackage/DEBIAN/control new file mode 100644 index 0000000..f855ce8 --- /dev/null +++ b/week9/hellopackage/DEBIAN/control @@ -0,0 +1,6 @@ +Package: hellopackage +Version: 1.0 +Architecture: all +Maintainer: RinRi +Depends: python3 +Description: Hello world diff --git a/week9/hellopackage/usr/local/bin/helloworld b/week9/hellopackage/usr/local/bin/helloworld new file mode 100755 index 0000000..6901f80 --- /dev/null +++ b/week9/hellopackage/usr/local/bin/helloworld @@ -0,0 +1 @@ +#!/bin/bash\n/var/helloworld/helloworld.py \ No newline at end of file diff --git a/week9/hellopackage/var/helloworld/helloworld.py b/week9/hellopackage/var/helloworld/helloworld.py new file mode 100755 index 0000000..4ea999b --- /dev/null +++ b/week9/hellopackage/var/helloworld/helloworld.py @@ -0,0 +1 @@ +#!/usr/bin/env python3\nprint("Hello, World!") \ No newline at end of file diff --git a/week9/lab9-image-01.jpg b/week9/lab9-image-01.jpg new file mode 100644 index 0000000..85022ca Binary files /dev/null and b/week9/lab9-image-01.jpg differ diff --git a/week9/lab9-solution.html b/week9/lab9-solution.html new file mode 100644 index 0000000..6bd0599 --- /dev/null +++ b/week9/lab9-solution.html @@ -0,0 +1,360 @@ + + + + + + + +Lab9 Solution Amirlan Sharipov (BS21-CS-01) + + + + + +
+

Lab9 Solution Amirlan Sharipov (BS21-CS-01)

+ + +
+

1. Question 1

+
+

+One alternative is to use a GPS-based NTP time source and sync the time with the satellites.
+

+
+
+ +
+

2. Question 2

+
+

+Use one server as an NTP server and the other as an NTP client, and sync them regularly using cron (a sketch is given below).
+
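A hedged sketch of the client side, assuming the peer acting as the NTP server is reachable as ntp-server.local (hypothetical hostname) and the ntpdate utility is installed:

# client crontab: step the clock from the peer every 10 minutes
*/10 * * * * /usr/sbin/ntpdate -u ntp-server.local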

+
+
+ +
+

3. Question 3

+
+

+apt is a newer command-line interface aimed at interactive usage.
+It is a high-level tool that covers the most common functionality of apt-get and apt-cache.
+

+
+
+ +
+

4. Question 4

+
+

+upgrade only upgrades installed packages and never removes them, whereas full-upgrade may also remove packages when that is needed to upgrade the system as a whole. Such removals may cause problems for system administrators.
+

+
+
+ +
+

5. Question 5

+
+

+The information is taken from https://linuxhint.com/install-atom-text-editor-ubuntu-22-04/.
+I am very skeptical about this method: Atom is deprecated, and so is apt-key, but it still works (a keyring-based alternative is sketched below).
+

+
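For reference, a sketch of the keyring-based alternative that avoids the deprecated apt-key (same key and repository as above; the keyring path is a common convention, not mandated):

sudo mkdir -p /etc/apt/keyrings
wget -qO- https://packagecloud.io/AtomEditor/atom/gpgkey | gpg --dearmor | sudo tee /etc/apt/keyrings/atom.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/atom.gpg] https://packagecloud.io/AtomEditor/atom/any/ any main" | sudo tee /etc/apt/sources.list.d/atom.list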
+ +
+

5.1. Add Atom’s GPG key

+
+

+wget -qO - https://packagecloud.io/AtomEditor/atom/gpgkey | sudo apt-key add - +

+
+
+
+

5.2. Add atom’s repository to sources list

+
+

+sudo sh -c 'echo "deb [arch=amd64] https://packagecloud.io/AtomEditor/atom/any/ any main" > /etc/apt/sources.list.d/atom.list'
+

+
+
+
+

5.3. Download package information from all sources

+
+

+sudo apt update +

+
+
+
+

5.4. Search for atom

+
+

+apt search atom
+The output was huge, with atom among the results.
+

+
+
+
+

5.5. Finally install atom

+
+

+sudo apt install atom +

+
+
+
+ +
+

6. Question 6

+
+
+
cat control create-package.sh
+
+
+ +
+Package: hellopackage
+Version: 1.0
+Architecture: all
+Maintainer: RinRi
+Depends: python3
+Description: Hello world
+#!/bin/bash
+
+rm -rf hellopackage
+mkdir -p hellopackage hellopackage/usr/local/bin hellopackage/var/helloworld
+printf "%s" '#!/usr/bin/env python3\nprint("Hello, World!")' > hellopackage/var/helloworld/helloworld.py
+printf "%s" '#!/bin/bash\n/var/helloworld/helloworld.py' > hellopackage/usr/local/bin/helloworld
+chmod -R 0755 hellopackage/var/helloworld hellopackage/usr/local/bin/helloworld
+
+mkdir -p hellopackage/DEBIAN
+cp control hellopackage/DEBIAN/
+dpkg-deb --build --root-owner-group hellopackage
+
+ +

+After this, just run sudo apt install ./hellopackage.deb and everything works.
+

+ + +
+

lab9-image-01.jpg +

+
+ +

+The artifacts created by the package are the same as in the image, but placed under the root directory (/) instead.
+
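The same check can also be done from the command line with standard dpkg queries, for example:

dpkg -c hellopackage.deb    # list the files inside the .deb before installing
dpkg -L hellopackage        # list the files the installed package put on the system
dpkg -s hellopackage        # show the control fields and install status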

+
+
+
+
+

Author: Amirlan Sharipov (BS21-CS-01)

+

Created: 2023-03-30 Thu 21:34

+
+ + \ No newline at end of file diff --git a/week9/lab9-solution.org b/week9/lab9-solution.org new file mode 100644 index 0000000..7fd9eb7 --- /dev/null +++ b/week9/lab9-solution.org @@ -0,0 +1,66 @@ +#+title: Lab9 Solution +#+title: Amirlan Sharipov (BS21-CS-01) +#+author: Amirlan Sharipov (BS21-CS-01) +#+PROPERTY: header-args :results verbatim :exports both +#+OPTIONS: ^:nil + +* Question 1 +One of the alternative is to use GPS-based ntp and sync time with the satellites. + +* Question 2 +Use one server as an ntp client and another as an ntp server and sync them regularly using cron. + +* Question 3 +apt is a new command-line interface aimer for interactive usage. +apt is a high-level tool to interact with tools like apt-get and apt-cache. + +* Question 4 +upgrade only upgrades the packages and never removes them, whereas full-upgrade may result in removal of some packages. This may cause problems for system administrators. + +* Question 5 +The information is taken from https://linuxhint.com/install-atom-text-editor-ubuntu-22-04/ +I am very skeptical about this method. Atom is deprecated. As well as apt-key. But it works + +** Add gpg atom's gpg keys +wget -qO - https://packagecloud.io/AtomEditor/atom/gpgkey | sudo apt-key add - +** Add atom's repository to sources list +sudo sh -c 'echo "deb [arch=amd64] https://packagecloud.io/AtomEditor/atom/any/ any main" > /etc/apt/sources.list.d/atom.list' +** Dowload packages information from all sources +sudo apt update +** Search for atom +apt search atom +There was a huge output with atom in it. +** Finally install atom +sudo apt install atom + +* Question 6 +#+begin_src bash +cat control create-package.sh +#+end_src + +#+RESULTS: +#+begin_example +Package: hellopackage +Version: 1.0 +Architecture: all +Maintainer: RinRi +Depends: python3 +Description: Hello world +#!/bin/bash + +rm -rf hellopackage +mkdir -p hellopackage hellopackage/usr/local/bin hellopackage/var/helloworld +printf "%s" '#!/usr/bin/env python3\nprint("Hello, World!")' > hellopackage/var/helloworld/helloworld.py +printf "%s" '#!/bin/bash\n/var/helloworld/helloworld.py' > hellopackage/usr/local/bin/helloworld +chmod -R 0755 hellopackage/var/helloworld hellopackage/usr/local/bin/helloworld + +mkdir -p hellopackage/DEBIAN +cp control hellopackage/DEBIAN/ +dpkg-deb --build --root-owner-group hellopackage +#+end_example + +After this just use sudo apt install ./hellopackage.deb and everything works. + +[[./lab9-image-01.jpg]] + +The artifacts created by the package are the same as in the image but from the root directory. diff --git a/week9/lab9.html b/week9/lab9.html new file mode 100644 index 0000000..94207f0 --- /dev/null +++ b/week9/lab9.html @@ -0,0 +1,748 @@ + + + + + + + + + + + + + Lab 9: System time and Package managers - HackMD + + + + + + + + + + + + + + + + + +

Lab 9: System time and Package managers

Exercise 1: System time

Task 1: Time zone

    +
  • Check the current time zone
    $ timedatectl
    +
    +You should get output similar to the following:
                   Local time: Вс 2022-11-06 12:40:33 +04
    +           Universal time: Вс 2022-11-06 08:40:33 UTC
    +                 RTC time: Вс 2022-11-06 08:40:33
    +                Time zone: Europe/Samara (+04, +0400)
    +System clock synchronized: yes
    +              NTP service: active
    +          RTC in local TZ: no
    +
    +
  • +
  • Assume that we are in Vladivostok and we want to set the timezone to Vladivostok Standard Time (GMT +10). To do this, first get a list of all available time zones.
    $ timedatectl list-timezones
    +
    +You will find the full name for the time zone in Vladivostok from the long list of output:
    Asia/Vladivostok
    +
    +
  • +
  • Now that we have identified the name of the time zone on our system, switch to that with the following command:
    $ sudo timedatectl set-timezone Asia/Vladivostok
    +
    +
  • +
  • Run timedatectl and you should get output different from what we had initially.
    $ timedatectl
    +               Local time: Вс 2022-11-06 18:55:05 +10
    +           Universal time: Вс 2022-11-06 08:55:05 UTC
    +                 RTC time: Вс 2022-11-06 08:55:05
    +                Time zone: Asia/Vladivostok (+10, +1000)
    +System clock synchronized: yes
    +              NTP service: active
    +          RTC in local TZ: no
    +
    +
  • +
  • To see how this affects system logging, restart rsyslog to trigger some system log events.
    $ systemctl restart rsyslog
    +
    +
  • +
  • View the log file /var/log/syslog.
    $ tail -n 15 /var/log/syslog
    +
    +You should see output similar to the following with noticeable change in the timestamp.
    Nov  6 12:55:35 sna-vm systemd[1]: systemd-timedated.service: Deactivated successfully.
    +Nov  6 12:57:44 sna-vm systemd[1606]: Started VTE child process 4376 launched by gnome-terminal-server process 2335.
    +Nov  6 18:57:54 sna-vm systemd[1]: Stopping System Logging Service...
    +Nov  6 18:57:54 sna-vm rsyslogd: [origin software="rsyslogd" swVersion="8.2112.0" x-pid="889" x-info="https://www.rsyslog.com"] exiting on signal 15.
    +Nov  6 18:57:54 sna-vm systemd[1]: rsyslog.service: Deactivated successfully.
    +Nov  6 18:57:54 sna-vm systemd[1]: Stopped System Logging Service.
    +Nov  6 18:57:54 sna-vm systemd[1]: Starting System Logging Service...
    +
    +
  • +
    +
  • You can manually change the time zone with a symlink. The symlink at /etc/localtime points to the time zone that is currently configured.
    $ ls -l /etc/localtime
    +lrwxrwxrwx 1 root root 36 ноя  6 18:54 /etc/localtime -> /usr/share/zoneinfo/Asia/Vladivostok
    +
    +
  • +
  • Remove the symlink
    $ sudo rm -rf /etc/localtime
    +
    +
  • +
  • Let’s change time zone to Moscow time. To do this, create a new symlink to the Moscow time Europe/Moscow in /usr/share/zoneinfo/.
    $ sudo ln -s /usr/share/zoneinfo/Europe/Moscow /etc/localtime
    +
    +
  • +
  • Check the time zone again.
    $ timedatectl
    +
    +
  • +

Task 2: NTP

We are going to set up an NTP server and then configure a client to use this NTP server. You need two VMs to test this. You can work in pairs to set up the client-server infrastructure.

Installing and configuring an NTP server

    +
  • +

    First install NTP and its dependencies.

    +
    $ apt install -y ntp
    +$ apt install -y ntpstat
    +
    +
  • +
  • +

    Open the NTP configuration file /etc/ntp.conf and configure the remote NTP server.

    +
    $ vi /etc/ntp.conf
    +
    +

    Locate the following lines in the configuration file
    +

    +

    Replace those lines with the following

    +
    + + +
    pool time1.google.com iburst
    +pool time2.google.com iburst
    +pool time3.google.com iburst
    +pool time4.google.com iburst
    +
    +

    Also comment out the pool ntp.ubuntu.com line.
    +You should have something similar to the following after this modification:
    +

    +
  • +
  • +

    Restart and enable the NTP service to apply the change.

    +
    $ systemctl restart ntp
    +$ systemctl enable ntp
    +
    +
  • +
  • +

    Allow NTP port on the firewall.

    +
    $ sudo ufw allow ntp
    +
    +
  • +
  • +

    It will take some time for your NTP server to synchronize with the google NTP servers. Wait for about a minute and run the following command to check the sync status:

    +
    $ ntpstat
    +
    +

    You should get output similar to the following:

    +
    synchronised to NTP server (185.125.190.56) at stratum 3 
    +   time correct to within 977 ms
    +   polling server every 64 s
    +
    +

    The local NTP server is ready.

    +
  • +
  • +

    View all NTP peers.

    +
    $ ntpq -p
    +     remote           refid      st t when poll reach   delay   offset  jitter
    +==============================================================================
    + time1.google.co .POOL.          16 p    -   64    0    0.000   +0.000   0.004
    + time2.google.co .POOL.          16 p    -   64    0    0.000   +0.000   0.004
    + time3.google.co .POOL.          16 p    -   64    0    0.000   +0.000   0.004
    + time4.google.co .POOL.          16 p    -   64    0    0.000   +0.000   0.004
    ++time1.google.co .GOOG.           1 u   15   64    1   70.098   +3.123   8.037
    +*time2.google.co .GOOG.           1 u   17   64    1   38.087   +6.703   6.897
    ++time3.google.co .GOOG.           1 u   15   64    1   39.216   +4.192   7.641
    ++time4.google.co .GOOG.           1 u   18   64    1   62.781   +8.027  13.030
    +
    +
  • +

Configuring the client

We show two approaches for configuring the client. You can use either the systemd-timesyncd service that comes with systemd-based systems by default, or chronyd.

1. Using systemd-timesyncd
    +
  • Edit the service configuration file:
    $ vi /etc/systemd/timesyncd.conf
    +
    +Add the following line to the file
    NTP=<ntp-server-address>
    +
    +You should have something like this:
    +
    +Save and exit the file.
  • +
  • Restart the time sync service
    $ systemctl restart systemd-timesyncd
    +
    +
  • +
  • Check the status of the time synchronisation:
    $ timedatectl timesync-status
    +       Server: 192.168.132.136 (192.168.132.136)
    +Poll interval: 1min 4s (min: 32s; max 34min 8s)
    +         Leap: normal
    +      Version: 4
    +      Stratum: 2
    +    Reference: D8EF2308
    +    Precision: 4us (-18)
    +Root distance: 33.583ms (max: 5s)
    +       Offset: -9.704ms
    +        Delay: 675us
    +       Jitter: 0
    + Packet count: 1
    +    Frequency: -81,109ppm
    +
    +
  • +
2. Using chrony
    +
  • Install chrony
    $ apt -y install chrony
    +
    +
  • +
  • Edit the chrony configuration file:
    $ vi /etc/chrony/chrony.conf
    +
    +Locate the following lines and remove them
    +
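    (On Ubuntu, the default source entries in /etc/chrony/chrony.conf typically look similar to the lines below; your file may differ slightly.)

    pool ntp.ubuntu.com        iburst maxsources 4
    +pool 0.ubuntu.pool.ntp.org iburst maxsources 1
    +pool 1.ubuntu.pool.ntp.org iburst maxsources 1
    +pool 2.ubuntu.pool.ntp.org iburst maxsources 2
    +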
    +Then add the address of the local NTP server we have configured in the form:
    server <ntp-server-address> iburst prefer
    +
    +
    +Save the configuration and exit.
  • +
  • Restart and enable the chronyd service.
    $ systemctl restart chrony
    +$ systemctl enable chrony
    +
    +
  • +
  • View all sources
    $ chronyc sources -v
    +
    +
    +

    Remember that chrony is a client/server utility. Therefore, other devices can be configured to use this “client” as their NTP “server”.

    +
    +
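    As a sketch (not required for this lab): to actually serve other clients, chrony must be told which networks may query it. The 192.168.132.0/24 subnet below is an assumption based on the addresses used earlier; adjust it to your lab network, then restart the service.

    # /etc/chrony/chrony.conf
    +allow 192.168.132.0/24
    +
    +$ systemctl restart chrony
    +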
  • +

Test the setup

    +
  • First check the current time on the client system.
    $ timedatectl
    +               Local time: Sun 2022-11-06 18:35:16 MSK
    +           Universal time: Sun 2022-11-06 15:35:16 UTC
    +                 RTC time: Mon 2022-11-07 11:49:46
    +                Time zone: Europe/Moscow (MSK, +0300)
    +System clock synchronized: yes
    +              NTP service: active
    +          RTC in local TZ: no
    +
    +
  • +
  • Go to the NTP server and change the date to something completely wrong.
    $ timedatectl set-time 2030-06-10
    +
    +
  • +
  • Go to the client system and restart the timesync service to force an immediate sync with the local NTP server.
    $ systemctl restart systemd-timesyncd
    +OR
    +$ systemctl restart chrony
    +
    +
  • +
  • Check the time on the client machine, and you should see that the client has been configured with the new date we added to the server.
    $ timedatectl
    +               Local time: Mon 2030-06-10 00:01:19 MSK
    +           Universal time: Sun 2030-06-09 21:01:19 UTC
    +                 RTC time: Sun 2030-06-09 21:01:19
    +                Time zone: Europe/Moscow (MSK, +0300)
    +System clock synchronized: yes
    +              NTP service: active
    +          RTC in local TZ: no
    +
    +
  • +
  • The local NTP server will later sync with the Google NTP servers we configured earlier and correct its time. The client machine will then sync with the local NTP server and correct its time too.
  • +
  • Wait for about a minute or two and check the time on the NTP server to see the change.
    $ timedatectl
    +               Local time: Sun 2022-11-06 18:42:19 MSK
    +           Universal time: Sun 2022-11-06 15:42:19 UTC
    +                 RTC time: Sun 2022-11-06 15:42:19
    +                Time zone: Europe/Moscow (MSK, +0300)
    +System clock synchronized: yes
    +              NTP service: n/a
    +          RTC in local TZ: no
    +
    +The time on the client machine will also be corrected after a while.
  • +

Exercise 2: Package managers

A package manager automates the process of installing, configuring, upgrading, and removing packages.
+There are several package managers depending on the OS. This lab focuses on Ubuntu package managers.
+Note that Ubuntu is based on Debian: it uses the same APT packaging system and shares a huge number of packages and libraries from the Debian repositories.

Task 3: dpkg

dpkg is a tool that allows the installation and analysis of .deb packages. It can also be used to package software.

    +
  • +

    View a list of all installed packages

    +
    $ dpkg -l
    +
    +

    You should get a long list of all installed packages, showing each package’s name, version, architecture, and a short description.
    +

    +
  • +
  • +

    Install a local .deb file using the command dpkg -i <deb-package>.
    +Let’s download and install the Google Chrome browser:

    +
    $ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    +$ dpkg -i google-chrome-stable_current_amd64.deb
    +
    +
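    Note that dpkg does not resolve dependencies on its own. If the installation above fails with unmet-dependency errors, a common fix is to let apt pull in the missing packages and finish the configuration:

    $ apt install -f
    +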
  • +
  • +

    Verify the package installation using dpkg -s <deb-package>

    +
    $ dpkg -s google-chrome-stable
    +
    +

    You should get output similar to the following:

    +
    Package: google-chrome-stable
    +Status: install ok installed
    +Priority: optional
    +Section: web
    +Installed-Size: 299404
    +Maintainer: Chrome Linux Team <chromium-dev@chromium.org>
    +Architecture: amd64
    +Version: 107.0.5304.87-1
    +Provides: www-browser
    +Depends: ca-certificates, fonts-liberation, libasound2 (>= 1.0.17), libatk-bridge2.0-0 (>= 2.5.3), libatk1.0-0 (>= 2.2.0), libatspi2.0-0 (>= 2.9.90), libc6 (>= 2.17), libcairo2 (>= 1.6.0), libcups2 (>= 1.6.0), libcurl3-gnutls | libcurl3-nss | libcurl4 | libcurl3, libdbus-1-3 (>= 1.5.12), libdrm2 (>= 2.4.60), libexpat1 (>= 2.0.1), libgbm1 (>= 8.1~0), libglib2.0-0 (>= 2.39.4), libgtk-3-0 (>= 3.9.10) | libgtk-4-1, libnspr4 (>= 2:4.9-2~), libnss3 (>= 2:3.26), libpango-1.0-0 (>= 1.14.0), libwayland-client0 (>= 1.0.2), libx11-6 (>= 2:1.4.99.1), libxcb1 (>= 1.9.2), libxcomposite1 (>= 1:0.4.4-1), libxdamage1 (>= 1:1.1), libxext6, libxfixes3, libxkbcommon0 (>= 0.4.1), libxrandr2, wget, xdg-utils (>= 1.0.2)
    +Pre-Depends: dpkg (>= 1.14.0)
    +Recommends: libu2f-udev, libvulkan1
    +Description: The web browser from Google
    + Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier.
    +
    +
  • +
  • +

    List all files installed by a package with the command dpkg -L <package-name>

    +
    $ dpkg -L google-chrome-stable
    +
    +
  • +
  • +

    Remove a package using dpkg -r <package-name>

    +
    $ dpkg -r google-chrome-stable
    +
    +
  • +
  • +

    Check the status of the package again and you should see that it is in the deinstall state, but the configuration files are still present.

    +
    $ dpkg -s google-chrome-stable
    +Package: google-chrome-stable
    +Status: deinstall ok config-files
    +
    +

    The -r option simply removes the package but all the configuration files are preserved.

    +
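    As an aside, you can list every package left in this “removed, config files remain” state: such packages are flagged rc in the first column of dpkg -l output.

    $ dpkg -l | grep '^rc'
    +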
  • +
  • +

    To remove a package along with its configuration files, specify the --purge option.

    +
    $ dpkg --purge google-chrome-stable
    +
    +
  • +
  • +

    Check the status of the package to verify that the package and all its configuration files have been removed.

    +
    $ dpkg -s google-chrome-stable
    +dpkg-query: package 'google-chrome-stable' is not installed and no information is available
    +Use dpkg --info (= dpkg-deb --info) to examine archive files.
    +
    +
  • +

Task 4: APT

The Advanced Package Tool (APT) is the standard package manager on Ubuntu systems. apt acts as a user-friendly front end to dpkg. Unlike dpkg, apt can download and install packages from online repositories.

    +
  • View all available packages in your repository
    $ apt list
    +
    +
  • +
  • List only installed packages
    $ apt list --installed
    +
    +
  • +
  • Search for a package using apt search <package-name>. Let’s search for “chromium”.
    $ apt search chromium
    +
    +
  • +
  • After knowing the correct package name, you can find out more details about the package using apt show <package-name>
    $ apt show chromium-browser
    +
    +
  • +
  • Proceed to install the package using apt install <package-name>
    $ apt install chromium-browser
    +
    +
  • +
  • Remove a package using apt remove <package-name>
    $ apt remove chromium-browser
    +
    +
  • +
+

You can install or remove multiple packages in one command by separating them with spaces, e.g. apt install package1 package2 package3.
+Removing multiple packages is similar, e.g. apt remove package1 package2 package3.

+
    +
  • Specify the remove option in combination with the --purge option to remove a package along with its configuration.
    $ apt remove --purge chromium-browser
    +
    +
  • +
  • Specify autoremove to remove the package and its dependencies, if not used by any other application. The format is shown below:
    $ apt autoremove <package-name>
    +
    +
  • +
  • You can update the APT package index to get a list of the available packages. This list can indicate installed packages that need upgrading, as well as new packages that have been added to the repositories.
    +The repositories are defined in the /etc/apt/sources.list file and in the /etc/apt/sources.list.d directory.
  • +
  • View the sources.list file to see all the defined repositories.
    $ cat /etc/apt/sources.list
    +
    +
  • +
  • To update the local package index with latest updates to repositories, use the following command:
    $ apt update
    +
    +
  • +
  • APT maintains a list of packages (package index) in /var/lib/apt/lists/.
    $ ls -lah /var/lib/apt/lists/
    +
    +
  • +
  • To upgrade a single package that has been previously installed, run the apt install <package-name> command.
  • +
  • To upgrade all installed packages, run the following command:
    $ apt upgrade
    +
    +This command will upgrade all packages that can be upgraded without installing additional packages or removing conflicting installed packages.
  • +
  • Run apt full-upgrade to upgrade the packages, the kernel, and remove conflicting packages or install new ones. The full-upgrade option is “smart” and can remove unnecessary dependency packages, or install new ones (if required).
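    For completeness, the command takes no additional arguments and is run just like apt upgrade:

    $ apt full-upgrade
    +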
  • +

Task 5: sources.list

    +
  • View the sources.list file without the comments
    $ grep -o '^[^#]*' /etc/apt/sources.list
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy main restricted
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy-updates main restricted
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy universe
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy-updates universe
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy multiverse
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy-updates multiverse
    +deb http://ru.archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse
    +deb http://security.ubuntu.com/ubuntu jammy-security main restricted
    +deb http://security.ubuntu.com/ubuntu jammy-security universe
    +deb http://security.ubuntu.com/ubuntu jammy-security multiverse
    +
    +
  • +
  • Each line in the sources.list has a structure in the following order: Type, URL, Distribution, and Component. Let’s analyse the first line in the sources.list above. +
      +
    • The Type is deb. The term deb indicates that it is a repository of binaries.
    • +
    • The repository URL is http://ru.archive.ubuntu.com/ubuntu/. This is the location of the repository where the packages will be downloaded.
    • +
    • The Distribution is jammy. This is the short code name of the release. Run $ cat /etc/os-release to view the VERSION_CODENAME of your system.
    • +
    • The Components are main and restricted. These are information about the licensing of the packages in the repository. +
        +
      • main contains Canonical supported free and open source software.
      • +
      • restricted contains proprietary drivers for devices.
      • +
      • universe contains community supported free and open source software.
      • +
      • multiverse contains software restricted by copyright or legal issues.
      • +
      +
    • +
    +
  • +

Adding repositories

    +
  • Let’s try to install a package that is not in the Ubuntu repositories by default: MongoDB.
    $ apt install mongodb-org
    +
    +You should get an output similar to the following
    Reading package lists... Done
    +Building dependency tree... Done
    +Reading state information... Done
    +E: Unable to locate package mongodb-org
    +
    +We need to add the MongoDB repository to our repository sources.
  • +
  • The basic syntax of the add-apt-repository command is as follows:
    $ add-apt-repository [options] repository
    +
    +The repository can be a regular repository entry that can be added to sources.list in the format deb http://ru.archive.ubuntu.com/ubuntu/ distro component or a PPA repository in the ppa:<user>/<ppa-name> format.
  • +
  • First import MongoDB PGP key. The key is used to verify the integrity of packages that are downloaded from this repository.
    $ curl -fsSL https://www.mongodb.org/static/pgp/server-6.0.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/mongodb-6.gpg
    +
    +
  • +
  • Create a list for MongoDB
    $ add-apt-repository 'deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse'
    +
    +
    +

    Alternatively, you can manually create the list file and add the repository to it.

    +
    echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
    +
    +
    +
  • +
  • View the new list file that has been created
    $ cat /etc/apt/sources.list.d/*mongodb*.list
    +
    +
  • +
  • The add-apt-repository command automatically updates the local package database for you. If you added the repository via an alternative means, you need to run $ apt update to update the local package database.
  • +
  • View the package index directory to see the MongoDB related files created.
    $ ls -lah /var/lib/apt/lists/ | grep mongodb
    +
    +
    +

    MongoDB requires libssl1.1 (>= 1.1.1). So let’s download and install this package before proceeding

    +
    $ wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
    +$ dpkg -i ./libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
    +
    +
    +
  • +
  • You can now proceed to install MongoDB from the newly enabled repository.
    $ apt install mongodb-org
    +
    +
    +

    You can automatically answer the prompt that asks you to type Y to confirm your choice by adding the -y option to your install command, in the format apt install mongodb-org -y. This is very useful when writing scripts that install packages.

    +
    +
  • +
  • Verify that MongoDB has been installed
    $ mongod --version
    +
    +
  • +
  • You can remove a previously enabled repository using the format add-apt-repository --remove repository. Let’s remove the MongoDB repository
    $ add-apt-repository --remove 'deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse'
    +
    +
  • +
  • Run the following commands to verify that the repository has been removed.
    $ ls -lah /var/lib/apt/lists/ | grep mongodb
    +$ cat /etc/apt/sources.list.d/*mongodb*.list
    +
    +
  • +

Task 6: Creating a custom Ubuntu package

    +
  • Create a directory for the package. We will name the package snalab.
    $ mkdir snalab
    +
    +
  • +
  • Create the internal structure by placing your program files where they should be installed on the target system. In this case, we want to place the program file in /usr/local/bin on the target system, so we create the directory snalab/usr/local/bin.
    $ mkdir -p snalab/usr/local/bin
    +
    +
  • +
  • Create the program file (script) snalab/usr/local/bin/snalab and add the following to it.
    #!/bin/bash
    +echo "Hello World. This is SNA Lab"
    +
    +
  • +
  • Give the program file execute permission
    $ chmod +x snalab/usr/local/bin/snalab
    +
    +
  • +
  • Create a directory DEBIAN in snalab
    $ mkdir snalab/DEBIAN
    +
    +
  • +
  • Create the control file in the DEBIAN directory. The control file contains package description and information about the maintainer. Create the file snalab/DEBIAN/control and add the following content.
    Package: snalab
    +Version: 1.0
    +Maintainer: Awwal
    +Architecture: all
    +Description: Hello SNA
    +
    +

    These are mandatory fields in the control file. There are several other fields that can be defined.

    +
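    For reference, a control file with a few commonly used optional fields might look like the sketch below. The Depends, Section, and Priority values are only illustrative and are not required by this lab; the indented line under Description is the extended description (continuation lines must start with a single space).

    Package: snalab
    +Version: 1.0
    +Maintainer: Awwal
    +Architecture: all
    +Depends: bash
    +Section: misc
    +Priority: optional
    +Description: Hello SNA
    + A simple package that installs the snalab hello-world script.
    +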
    +
  • +
  • Build the package with dpkg-deb
    $ dpkg-deb --build --root-owner-group snalab
    +
    +
    +

    The --root-owner-group flag makes all deb package content owned by the root user. Without this flag, all files and folders would be owned by your user, which might not exist in the target system the deb package would be installed to.

    +
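    You can verify the ownership recorded in the package by listing its contents; the output is tar-like and shows the owner/group of every entry:

    $ dpkg-deb -c snalab.deb
    +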
    +
  • +
  • You should have a Debian package named snalab.deb in your current working directory. Proceed to install this package.
    $ dpkg -i snalab.deb
    +
    +You should get an output similar to the following
    Selecting previously unselected package snalab.
    +(Reading database ... 195723 files and directories currently installed.)
    +Preparing to unpack snalab.deb ...
    +Unpacking snalab (1.0) ...
    +Setting up snalab (1.0) ...
    +
    +
  • +
  • Run the newly installed package.
    $ snalab
    +Hello World. This is SNA Lab
    +
    +
  • +
+

You can have a compiled application in place of the script we have used.

+

Questions to answer

+

Instruction: Show all steps taken including screenshots of commands executed, files created, and configuration added.

+
    +
  1. What alternative do you have for configuring your NTP server pool if you don’t want to be dependent on NTP servers on the internet? The time must be accurate and appear to be in sync with other devices globally. Describe how you will perform this setup. +
    +

    The accuracy of the time should be strongly considered.

    +
    +
  2. +
  3. You have two Linux servers whose time won’t stay in sync for various reasons. They tend to drift so much that they have a 30-second difference after 7 days of operation. What can you do to ensure that they stay in sync with each other without relying on external devices or servers? +
    +

    Hint: Inaccurate time is not a problem in this case. The goal is to ensure that both servers are in sync.

    +
    +
  4. +
  5. What are the differences between apt and apt-get?
  6. +
  7. Why should System Administrators prefer apt upgrade over apt full-upgrade?
  8. +
  9. Show how you will install the Atom text editor from an apt repository. Provide an explanation for every step you take. +
      +
    • After adding the repository, show the output when you run $ apt search atom
    • +
    +
    +

    You are not allowed to manually download the debian package and install it.

    +
    +
  10. +
  11. Create an Ubuntu package that meets the following requirements: +
      +
    • The package creates the directory /var/helloworld/ on the target system.
    • +
    • The package contains the python script /var/helloworld/helloworld.py. The python script is simple:
      #!/usr/bin/env python3
      +print("Hello, World!")
      +
      +
    • +
    • The package should deploy a bash script helloworld that executes /var/helloworld/helloworld.py on the target system.
    • +
    +
  12. +

Take the following steps after building the package

    +
  • List the content of the package with the command $ dpkg -c <package-name>.deb.
  • +
  • Install the package and show all artifacts added to your system by the package.
  • +
+

After a user installs your package, they should be able to run $ helloworld from the terminal without additional steps.
+The expected flow of execution is helloworld (bash script) -> helloworld.py -> Output (Hello, World!)

+

Bonus

+

The bonus tasks encourage you to familiarize yourself with CentOS and RPM packages. Linux distributions such as Red Hat and CentOS are very common, and they use RPM packages.

+
    +
  1. Find and add a new source repository to be used with yum. +
      +
    • Install a package from it (for example MongoDB).
    • +
    • Check with the RPM package manager to verify that the package was installed, and provide details such as dependencies needed.
    • +
    • Find logs related to all actions from the previous steps.
    • +
    +
  2. +
  3. Sometimes you might have access to an open-source application’s source code but not have an RPM file to install it on your system. In that situation, you can either compile the source code and install the application from source, or build an RPM file from the source code yourself and use it to install the application. There might also be a situation where you want to build a custom RPM package for an application that you developed.
    +Create an RPM package to deploy any application of your choice.
  4. +