CSCI-490: Web-Server Monitoring & Security

brief explanation

this page is a storyboard, documentation, and demonstration of my "CSCI-490, Information System Development" course project.

the process will depend on 3 virtual machines in a virtual NAT network: a server, an attacker, and a remote that monitors the server.

the goal of the project is to demonstrate the operation of an intrusion detection/prevention system (in this case, suricata) by simulating -- with moderate accuracy -- attacks on a web-server by a hacker, and the server's detection and prevention of these attacks.

the attacker will use custom scripts written in python (with pipenv) and bash to automate those attacks.

the defender detects the attacks through suricata, and demonstrates different ways to mitigate each attack.

a remote machine with grafana then analyzes the data from the server through prometheus and loki data sources.

illustration of the goal

on the ubuntu server

apache

the apache core can load modules that extend its functionality, similar to plugins in other software. these modules enable the server to perform various tasks, such as processing PHP applications using the appropriate engine.

by using a combination of modules and configuration options, the apache server can communicate with clients, sending and receiving packets of data, and connect to databases to store and manipulate that data.

how apache works

suricata

suricata, a network intrusion detection and prevention system, analyses incoming traffic and provides a response if an attack is detected.

how suricata works

prometheus

prometheus scrapes metrics from configured targets at specified intervals.

it stores this data locally and provides an HTTP API for querying it.

visualization tools (like grafana) can then use this API to fetch the data from prometheus and display it in customizable dashboards.

how prometheus works

on the remote machine

prometheus-grafana connection

prometheus scrapes stats from the node exporter and stores them in its time-series database. grafana then accesses prometheus to retrieve and visualize these stats.

prometheus to grafana connection, hardware

apache stats are exported to prometheus using the prometheus apache exporter, and grafana retrieves these stats from prometheus for visualization.

prometheus to grafana connection, apache

loki-grafana connection

suricata logs are aggregated by promtail and loki, then exported to grafana through the loki data source.

loki to grafana connection

grafana visualization

the metrics, logs and all other data from the data sources are visualized in grafana with custom dashboards.

grafana dashboard

preparation

virtual machines

for the virtualization software we will use virt-manager, with qemu/libvirt. we will need 3 virtual machines:

  • an ubuntu machine (the web-server)
  • a kali machine (the attacker)
  • an arch machine (the remote monitor)

networking

in order for the machines to communicate, we have to configure the network interfaces for each.

we will use NAT for the machines to communicate with each other, the host, and the internet (using the host's IP address), without being able to communicate with other machines on the host's network.

most distributions set up a default NAT upon the installation of virt-manager and the successful configuration of libvirt.

if there is no default NAT, creating one is as simple as clicking "add network", selecting "NAT" from the drop-down menu, and clicking confirm.

shared filesystem (optional)

in order to write scripts, configuration files, and other things that might involve a text editor, i prefer to use my host's editor.

to avoid installing neovim on every machine, and copying the configuration, i will use a shared filesystem. this way i can just place shared files in a directory accessible by each machine, and use neovim on my host with all my plugins and configurations.

to follow along, create a directory vms/, with the following subdirectories:

  • vms/kali
  • vms/ubuntu
  • vms/arch

each directory will have files accessible to the respective machine.

we will be able to edit these files using the host's environment, and still make them accessible to the vms.

this will come in very handy later on, when writing the automation scripts, and designing the test website for the server.

to add the shared filesystem, with virt-manager open, double click on the created virtual machine. then click on "view", then "details", then click the "add hardware" button on the bottom left:

add hardware - virt-manager

select "filesystem", then select the proper driver and fill in the created directory. after that, choose a target path to create. mine will be "/shared".

i also created a mount target on the virtual machine; ~/shared

filesystem - virt-manager

in order to access the shared directory on the virtual machines, we need to mount the target path created earlier to a mount point on the machine. to do that, run the following command on the virtual machine:

sudo mount -t <driver-name> <target-path> <mount-target>

for example, in my case, it would be:

sudo mount -t 9p /shared ~/shared

i'm comfortable running the mount command every time i start the virtual machines, but you can also automate the process. perhaps by writing a script and adding it to $PATH, or by creating a systemd service.
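for example, one way to automate the mount (a sketch: the tag /shared and the 9p driver match my setup above, and <user> is a placeholder for your username) is an /etc/fstab entry so it happens at boot:

```
/shared  /home/<user>/shared  9p  trans=virtio,rw,_netdev  0  0
```

the trans=virtio option selects the virtio transport that virt-manager's filesystem device uses, and _netdev delays the mount until the relevant devices are up.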

assets

these will come in handy later on:

ubuntu server configuration

install and configure apache

after installing ubuntu on the virtual machine, we first make sure the system is up to date. run the following command:

sudo apt update && sudo apt upgrade

we then install apache by running the following command:

sudo apt install apache2

after the installation is complete, the server should be ready.

to check, we can visit the IP address of the virtual machine using the host's web browser. if the set-up is successful, the below page will appear:

apache default page - ubuntu

if it doesn't, reboot the machine.

now, we can create our websites by placing them in the /var/www/html/<website-name> directory. this allows us to visit the site through http://<vm-ip-address>/<website-name>.

install and setup mysql

to install mysql-server:

sudo apt install mysql-server php libapache2-mod-php php-mysql

then, simply run the following command to set it up:

sudo mysql_secure_installation

during the setup, you will be asked to set a user and password. you can use "root" for both, as this is a simple demo, and the website i provided in assets uses the username "root" and password "root" for the database connection.

you can then start mysql, and enter your password with the following command:

sudo mysql --password

when importing an SQL file for a database, you will be asked for two passwords: the first is your sudo password, and the second is the mysql password we set earlier. use the following command to do so:

sudo mysql -u <username> -p <database_name> < <file.sql>

for example, in my case, it would be:

sudo mysql -u root -p mysite < mysitedb.sql

install and configure prometheus

to install prometheus, we run the following command:

sudo apt install prometheus

the configuration can be placed anywhere; the default is /etc/prometheus/prometheus.yml. to start prometheus, we simply run the prometheus command. to pass a custom configuration file, we can run the following:

prometheus --config.file=<path-to-configuration-file>

the ubuntu prometheus package provides a systemd service that is enabled automatically. the default configuration is a very good starting point for our purposes:

/etc/prometheus/prometheus.yml
global:
    scrape_interval: 15s
    evaluation_interval: 15s
    external_labels:
        monitor: 'example'
alerting:
    alertmanagers:
        - static_configs:
              - targets: ['localhost:9093']
scrape_configs:
    - job_name: 'prometheus'
      scrape_interval: 5s
      scrape_timeout: 5s
      static_configs:
          - targets: ['localhost:9090']
    - job_name: 'node'
      static_configs:
          - targets: ['localhost:9100']

download and configure apache_exporter

apache exporter is a prometheus exporter for apache.
it exports "mod_status" statistics via HTTP for Prometheus consumption.

we can download the apache_exporter binary from the github repository's releases page, or with the following command:

curl -s "https://api.github.com/repos/Lusitaniae/apache_exporter/releases/latest" \
| grep browser_download_url \
| grep linux-amd64 \
| cut -d '"' -f 4 | wget -qi -

after downloading the tarball, we need to extract it, then move the binary into the /usr/local/bin directory, and make it executable:

tar xvf apache_exporter-*.linux-amd64.tar.gz
sudo cp apache_exporter-*.linux-amd64/apache_exporter /usr/local/bin
sudo chmod +x /usr/local/bin/apache_exporter

we can now use the following command to run the exporter:

apache_exporter --insecure \
    --scrape_uri="http://localhost/server-status/?auto" \
    --telemetry.endpoint=/metrics
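note that --scrape_uri points at apache's mod_status output. if http://localhost/server-status returns a 404, enable the module with sudo a2enmod status and reload apache. the relevant configuration (roughly what the ubuntu apache2 package ships in mods-available/status.conf) looks like this:

```
<IfModule mod_status.c>
    <Location /server-status>
        SetHandler server-status
        Require local
    </Location>
</IfModule>
```

the Require local directive restricts the status page to requests from the server itself, which is what the exporter makes.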

we can also, optionally, create a service file in /etc/systemd/system/apache_exporter.service with the following:

/etc/systemd/system/apache_exporter.service
[Unit]
Description=Apache Exporter
Documentation=https://github.com/Lusitaniae/apache_exporter
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=user
Group=user
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/apache_exporter \
    --insecure \
    --scrape_uri=http://localhost/server-status/?auto \
    --telemetry.endpoint=/metrics

SyslogIdentifier=apache_exporter
Restart=always

[Install]
WantedBy=multi-user.target

this allows us to start, restart, and stop the exporter using a systemd service. we can start the exporter service and enable it after reloading the daemon:

sudo systemctl daemon-reload
sudo systemctl start apache_exporter.service
sudo systemctl enable apache_exporter.service

after the exporter is up and running, we can configure the exporter job in prometheus. we add the below snippet under scrape_configs:

/etc/prometheus/prometheus.yml
scrape_configs:
    # ...
    - job_name: 'apache'
      scrape_interval: 5s
      static_configs:
          - targets: ['localhost:9117']

our complete prometheus.yml file should now look like this:

/etc/prometheus/prometheus.yml
global:
    scrape_interval: 15s
    evaluation_interval: 15s
    external_labels:
        monitor: 'example'
alerting:
    alertmanagers:
        - static_configs:
              - targets: ['localhost:9093']
scrape_configs:
    - job_name: 'prometheus'
      scrape_interval: 5s
      scrape_timeout: 5s
      static_configs:
          - targets: ['localhost:9090']
    - job_name: 'node'
      static_configs:
          - targets: ['localhost:9100']
    - job_name: 'apache'
      scrape_interval: 5s
      static_configs:
          - targets: ['localhost:9117']

we can now restart prometheus and the apache metrics should be available for use with grafana later:

sudo systemctl restart prometheus

installing and configuring suricata

the package is available in the standard ubuntu 22.04 repository, so we can install it with apt:

sudo apt install suricata

now that we have suricata installed, we need to configure a few things. particularly:

  • set the correct HOME_NET
  • ensure the correct interface is used
  • include a ruleset

we can find the suricata configuration in /etc/suricata/suricata.yaml. edit the configuration file and make sure to configure HOME_NET and interface correctly:

/etc/suricata/suricata.yaml
vars:
    address-groups:
        HOME_NET: "[192.168.122.0/24]"
# ...
af-packet:
    - interface: enp1s0
# ...
pcap:
    - interface: enp1s0
# ...
netmap:
    - interface: enp1s0
# ...
pfring:
    - interface: enp1s0

then, we add the ruleset(s). the following command will display the available ruleset providers:

sudo suricata-update list-sources

to enable a source, run the following command:

sudo suricata-update enable-source <vendor>/<name>

after enabling all the sources we need, we can write the rules to the default directory suricata looks in: /etc/suricata/rules/

sudo suricata-update -o /etc/suricata/rules/

the above command will write the rules to suricata's default directory, in the file suricata.rules. the file is sourced by suricata by default.
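besides the downloaded rulesets, we can also write our own rules. as a sketch (the sid, rev, and message are arbitrary values i chose for illustration), a hand-written rule that alerts on ICMP echo requests toward HOME_NET could look like the following, placed in a file such as /etc/suricata/rules/local.rules and listed under rule-files in the configuration:

```
alert icmp any any -> $HOME_NET any (msg:"ICMP ping detected"; itype:8; sid:1000001; rev:1;)
```

the general shape is action, protocol, source/destination, and a set of options in parentheses; sids above 1000000 are conventionally reserved for local rules.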

in order to change the default directory for rules, or to add more rule files, edit the following part of the configuration file:

/etc/suricata/suricata.yaml
# ...
default-rule-path: /etc/suricata/rules

rule-files:
    - suricata.rules
# ...

we can start, restart, stop, and check the status of the suricata service with systemd:

sudo systemctl restart suricata
sudo systemctl status suricata

now that suricata is up and running, we can export the logs to grafana on the host.

in order to do that, we need to configure loki and promtail. every loki release includes a promtail binary.

i have written a script to download the latest release from the loki releases page, and create systemd services for loki and promtail, as well as provide sample configs.

before running the script, make sure unzip, curl, and wget are installed:

sudo apt install unzip curl wget

below is the script:

set -e
VLOKI=$(
    curl -s "https://api.github.com/repos/grafana/loki/releases/latest" |
        grep -Po '"tag_name": "v\K[0-9.]+'
)
sudo mkdir -p /opt/loki
[[ -f /opt/loki/loki.zip ]] || sudo wget -O /opt/loki/loki.zip \
    "https://github.com/grafana/loki/releases/download/v${VLOKI}/loki-linux-amd64.zip"
[[ -f /opt/loki/promtail.zip ]] || sudo wget -O /opt/loki/promtail.zip \
    "https://github.com/grafana/loki/releases/download/v${VLOKI}/promtail-linux-amd64.zip"
sudo unzip -o /opt/loki/loki.zip -d /opt/loki
sudo unzip -o /opt/loki/promtail.zip -d /opt/loki
sudo ln -sf /opt/loki/loki-linux-amd64 /usr/local/bin/loki
sudo ln -sf /opt/loki/promtail-linux-amd64 /usr/local/bin/promtail
sudo wget -O /opt/loki/loki-local-config.yaml \
    "https://raw.githubusercontent.com/grafana/loki/v${VLOKI}/cmd/loki/loki-local-config.yaml"
cat <<EOF | sudo tee /opt/loki/promtail-local-config.yaml
server:
    http_listen_port: 9080
    grpc_listen_port: 0
positions:
    filename: /data/loki/positions.yaml
clients:
    - url: http://localhost:3100/loki/api/v1/push
scrape_configs:
    - job_name: system
      static_configs:
          - targets:
                - localhost
            labels:
                job: suricata_logs
                __path__: /var/log/suricata/eve.json
EOF
cat <<EOF | sudo tee /etc/systemd/system/loki.service
[Unit]
Description=Loki service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/loki -config.file /opt/loki/loki-local-config.yaml

[Install]
WantedBy=multi-user.target
EOF
cat <<EOF | sudo tee /etc/systemd/system/promtail.service
[Unit]
Description=Promtail service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/promtail -config.file /opt/loki/promtail-local-config.yaml

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable loki.service
sudo systemctl enable promtail.service

we can optionally reboot the server vm now.

the default configuration files created by the script above (for loki and promtail) are found in /opt/loki/{loki|promtail}-local-config.yaml. these files can be edited to customize the behavior of loki and promtail.

the systemd service files run the loki and promtail binaries, and pass through those config files as arguments.

network configuration

since we're using NAT with a virtual DHCP server, we can't guarantee that the machine's ip address stays the same. to work around that, we configure a static ip address.

the network configuration is in /etc/netplan. in that directory, there will be a file that ends with the .yaml extension. we open the file with our favorite editor:

sudo vim /etc/netplan/*.yaml

we should see something like the following:

/etc/netplan/99-netconfig.yaml
network:
    ethernets:
        enp1s0:
            dhcp4: true
    version: 2

enp1s0 is our network interface. the name may differ from one machine to another. to configure static networking, we need to configure a few things:

  • disable DHCP
  • specify the static address
  • specify the default route
  • specify the DNS address

a proper configuration should look something like the following:

/etc/netplan/99-netconfig.yaml
network:
    ethernets:
        enp1s0:
            dhcp4: false
            addresses:
                - 192.168.122.57/24
            routes:
                - to: default
                  via: 192.168.122.1
            nameservers:
                addresses: [8.8.8.8, 1.1.1.1]
    version: 2

after the configuration is complete, we simply run the following command to apply the changes:

sudo netplan apply

we can also (optionally) reboot, just in case.

grafana setup

installation

on the remote machine, install grafana with the distribution's package manager. in my case, it's arch:

sudo pacman -S grafana

we can then start the grafana service:

sudo systemctl start grafana.service

we can also, optionally, enable it at startup:

sudo systemctl enable grafana.service

grafana sign-up

when the grafana service is started, it will be listening on port 3000 by default. in order to use grafana, we need to create a new account. navigating to http://localhost:3000 using the machine's web browser will open the grafana sign-up page.

we create a user and password. these are the credentials we'll use to access our grafana dashboards. after the signup is successful, we see the following page, where we can log-in to our new account:

grafana log-in page

adding the prometheus data source

after signing in successfully, we can access our dashboards.

to create a dashboard, we first need some data sources. the first data source we'll add is prometheus. before adding the source, we have to make sure that the prometheus service is running on the ubuntu server with the following command:

sudo systemctl status prometheus.service

if the prometheus service is not running, we can start it, and optionally enable it at startup, with the following commands:

sudo systemctl start prometheus.service
sudo systemctl enable prometheus.service

this runs prometheus with the default config found in /etc/prometheus/prometheus.yml, which typically has the target as localhost:9100 and job_name as node. in order to add the data source, we click on the menu button on the top left, and go to "connections", then click "add new connection":

grafana add connections

after doing so, we search for the prometheus data source (available in the default grafana sources):

grafana prometheus data source

then, we configure the data source as per our ubuntu server configuration.

prometheus typically runs on port 9090 by default. we give the data source a name, in my case it's ubuntu-server-prometheus, and specify the server url, in my case http://192.168.122.57:9090. that is the ip address of the ubuntu server, and the port 9090.

we can also, optionally, set-up an authentication method. for the purposes of this demo, i won't be doing that.

grafana prometheus data source setup

after the configuration is done, we click on "save and test". if it's successful, the below message should appear:

grafana added data source successfully

we can now start configuring the dashboards with the new data source.

create the first dashboard with prometheus

now that the data source is added, we can start creating a dashboard.

we click on the menu button, and navigate to "dashboards", then click "new dashboard". after that, we click the "add visualization" button to add a graph.

grafana add visualization

then, we select the prometheus data source we just added:

grafana select data source

the first graph we want to add is the CPU. to do that, we remove the default random walk query, and create our own with the following query:

avg without(cpu) (rate(node_cpu_seconds_total{job="node", instance="localhost:9100", mode!="idle"}[1m]))

this query does a few things:

  • get the average rate, excluding the "cpu" label. this is because the CPU can have multiple cores; if each core were included, the maximum usage percentage would be the number of CPUs * 100. for example, in a system with 8 CPUs, the maximum usage would be 800%
  • use the total number of seconds the CPU spent in each mode, excluding the "idle" mode
  • compute the per-second rate over a 1-minute window
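the rate over a window can be made concrete with a small sketch (illustrative only, with made-up samples; prometheus's actual rate() also handles counter resets and extrapolates to the window boundaries):

```python
# hypothetical counter samples: (timestamp in seconds, cumulative CPU-seconds)
samples = [(0, 100.0), (15, 130.0), (30, 175.0), (45, 205.0), (60, 220.0)]

def simple_rate(samples):
    """Average per-second increase between the first and last sample."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

print(simple_rate(samples))  # 2.0 -> 2 CPU-seconds consumed per wall-clock second
```

a value of 2.0 for a non-idle CPU counter would mean roughly two cores' worth of work during that minute.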

the result should look something like the following:

grafana first graph

we can use the right-side panel to customize the look of the graph. for example, below is what my fully customized cpu utilization graph looks like:

grafana first graph

we can now do the same for the RAM usage, available storage, and network traffic. below are the PromQL queries for each:

  • RAM
A
node_memory_MemTotal_bytes{
    instance="localhost:9100", job="node"
} - node_memory_MemFree_bytes{
    instance="localhost:9100", job="node"
} - node_memory_Cached_bytes{
    instance="localhost:9100", job="node"
} - node_memory_Buffers_bytes{
    instance="localhost:9100", job="node"
}
B
node_memory_MemTotal_bytes{
    instance="localhost:9100", job="node"
}
  • storage
A
node_filesystem_size_bytes{
    job="node", instance="localhost:9100", mountpoint="/"
} - node_filesystem_free_bytes{
    job="node", instance="localhost:9100", mountpoint="/"
}
B
node_filesystem_free_bytes{
    job="node", instance="localhost:9100", mountpoint="/"
}
  • network
A
irate(node_network_receive_bytes_total{
    job="node", instance="localhost:9100", device="enp1s0"
}[1m])
B
irate(node_network_transmit_bytes_total{
    job="node", instance="localhost:9100", device="enp1s0"
}[1m])

my final dashboard with the previous queries and graphs looks like this: grafana custom dashboard

import a dashboard for apache

now that we know how to create our own dashboards, i'll demonstrate how we can import third-party dashboards created by the community. the dashboard we're going to import is an apache http dashboard from grafana labs' jsonnet libraries on github.

before we clone the repository and generate the dashboards, we have to make sure we have go and the required tools installed:

sudo pacman -S --needed go
go install "github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest"
go install "github.com/monitoring-mixins/mixtool/cmd/mixtool@latest"

we then clone the repository and cd into it:

git clone "https://github.com/grafana/jsonnet-libs"
cd jsonnet-libs

now that we're in the jsonnet-libs directory, we can generate the dashboards. the one we need is called apache-http-mixin.

to generate the dashboards, we cd into the sub-directory and run the make command. since the tools we installed with go install live in $HOME/go/bin, we first need to make sure that directory is in $PATH.

export PATH=$PATH:$HOME/go/bin
cd apache-http-mixin
make

the dashboard we're looking for will be in the dashboards_out directory, with the name apache-http.json. to import the dashboard, we go to "Dashboards" in grafana, and click on the plus button in the upper right corner, then click on "import dashboard", then "upload dashboard json file". we select the json file we just generated in the dashboards_out directory.

below is what our final result will look like, after choosing the appropriate data source and job:

grafana imported apache dashboard

suricata logs and dashboard

to display the suricata logs in our grafana dashboard, we need to configure a loki data source first. follow the same steps in the "adding the prometheus data source" section above to add the loki data source, and configure the correct IP address and port number.

loki listens on port 3100 by default -- the same port our promtail configuration (created in the "installing and configuring suricata" section) pushes to -- so that's the port to use in the data source url.

grafana - adding loki

after adding the loki data source, we can create our suricata logs dashboard.

grafana - suricata logs

grafana alerts

by navigating to "alerting rules", we can add and configure grafana alerts:

grafana alerts page

we can also add custom contact points:

grafana slack contact point

these contact points allow us to receive alerts from grafana:

grafana slack alert

i won't cover alerting in this walk-through, since there's no "standard" or "common" way to do so. figure out your preferred approach. writing alert rules is similar to writing PromQL queries.

the example above uses alerts for pending apt updates, and for suricata alert and anomaly events. these alerts are sent to a slack contact point, using a slack application created by following their documentation.

the attacker

setup

to start, i created a program that runs a couple of scripts to automate the attack on the server. in order to follow along, make sure to download the vulnerable website and the attack scripts from the assets entry. these will be used in the attack demonstration.

the attack scripts are placed on the kali machine, and the website is placed on the ubuntu server as instructed in the last part of the "install and configure apache" section.

don't forget to change the credentials in website/connect.php to align with the credentials chosen during the "install and setup mysql" section.

an optional step is to add the ubuntu server's ip address to /etc/hosts.

kali /etc/hosts

this way, we can access the site using http://mysite.local instead of the IP address of the ubuntu machine.
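assuming the static address configured in the "network configuration" section, the /etc/hosts entry would look something like this:

```
192.168.122.57    mysite.local
```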

the attack scripts

the program i wrote checks for necessary dependencies to run.

by necessary, i mean essential to the program's operation, and without which the program would fail to launch.

attack-scripts essential dependencies check

if the necessary dependencies are met, then the script continues to check optional dependencies; those not necessary for the program to run, but essential for some attack scripts.

attack-scripts optional dependencies check

Note

openssl isn't required.

it is only there to demonstrate the dependencies check.

when the script is launched, a green option indicates the ability to run the attack, and a red option indicates missing dependencies.

attack-scripts possible attacks

you'll notice a disclaimer when starting the script:

attack-scripts disclaimer

to reiterate:

don't be an idiot.

don't use the script on some random website and expect it to work. and if it does, by some miracle, actually work, i decline any liability for your actions or their consequences.

testing the IDS

to start off, we'll try a simple DoS attack, and check a custom grafana dashboard i created for suricata. to run the DoS attack, select the "denial of service" option in the attack script, and input the IP address of the ubuntu server.

DoS attack demo

and indeed, the attack is detected by suricata, and displayed in the alerts dashboard in grafana:

DoS detected by suricata

brute-force login

the vulnerable website (attached in assets), has a login box. on the first page (<website>/1/), the credentials are stored in plain text in the database.

the database is also available in the "assets" section.

page 1 login prompt

when trying to log in with invalid credentials, we get a 401 error with a fail message.

fail message

we can use the script to perform a brute-force attack. this will allow us to choose a username to try, then the script will run all the passwords in all the wordlists listed in the script's wordlists directory.

select the "brute-force (web)" option, and in the url section, type http://<website>/1/. for the username, type "admin". this is the username we'll try to find the password for.

successful brute-force

we can see the credentials we got from the brute-force attack are valid.

SQL injection

if we navigate to the second page of the website (<website>/2/), we'll see another login form.

page 2 login prompt

when we try to log in with invalid credentials, we'll see the same fail message as before.

fail message

this time, instead of brute-force, we will try a quick and dirty trick: SQL injection.

in the first field of the form (username), we will type the following:

admin' OR '1'='1

fill the password with anything.

SQLi attempt

we will notice a success message, meaning the login was successful. but the password isn't in plain text this time.

SQLi successful, retrieved hashed password
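to see why the payload works, consider how an unparameterized query gets built (a python sketch; the site itself is PHP, and the table and column names here are illustrative):

```python
username = "admin' OR '1'='1"  # the injected payload
password = "anything"

# naive string interpolation, as a vulnerable backend would do
sql = (
    "SELECT * FROM `users` "
    f"WHERE `username` = '{username}' AND `password` = '{password}'"
)
print(sql)
```

since AND binds tighter than OR, the resulting WHERE clause reads as `username = 'admin' OR ('1'='1' AND password = 'anything')`, so the admin row matches regardless of the password.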

dictionary attack

as seen in the "SQL injection" section, the password we got was hashed.

determining the type of the hash is trivial. you can use the "hash-identifier" tool available on kali linux, or any online hash identifier.

hash-identifier identifies hash type

from the provided "possible hashs", we can assume it's SHA-256.

now we can run a dictionary attack against the hash. start the attack script, and select the "brute-force (local)" option. the script will ask you to choose the hash type, then it will utilize hashcat to crack the hash with the wordlists mentioned earlier in the "brute-force login" section.
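conceptually, the dictionary attack boils down to hashing each candidate and comparing digests. a python sketch (the target hash and wordlist here are made up; the real script delegates this work to hashcat):

```python
import hashlib

target = hashlib.sha256(b"hunter2").hexdigest()  # hypothetical stolen hash
wordlist = ["password", "letmein", "hunter2", "qwerty"]

def crack(target_hash, words):
    """Return the first candidate whose SHA-256 digest matches, else None."""
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack(target, wordlist))  # hunter2
```

hashcat does the same thing, just massively parallelized on the GPU and with rule-based mutations of each candidate.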

hashcat running

after a couple of seconds, hashcat will find the correct password.

hashcat cracked password

if we copy the password (lll-222-l9eeemailaaddress.tst), and use it with the "admin" username to log in, we can confirm that this is the password.

resolution

this section will be about providing possible mitigation techniques for each of the attacks described in "the attacker" section.

problem 1: no DoS protection

to mitigate DoS attacks, we can make suricata operate as an IPS.

to switch to IPS mode, we can add the following line in /etc/default/suricata:

/etc/default/suricata
# ...
LISTENMODE=nfqueue
# ...
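note that in nfqueue mode, suricata only inspects packets the kernel explicitly sends to the queue, so matching iptables rules are also needed. a sketch (which chains to hook depends on your setup; these queue all input and output traffic):

```
sudo iptables -I INPUT -j NFQUEUE
sudo iptables -I OUTPUT -j NFQUEUE
```

with that in place, suricata can drop matching packets instead of merely alerting on them.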

problem 2: plain text passwords

storing passwords in plain text is horrible practice, but unfortunately quite common.

credentials should be stored in a secure manner, by hashing them before storing them in the database. with PHP, we can achieve that using the password_hash() function.

<?php
// hash a password with BCrypt
password_hash($password, PASSWORD_BCRYPT);
// verify a password
password_verify($password, $hash);

problem 3: unsanitized input

SQLi is a result of poorly written backends, and unsanitized input. below is an example of problematic code:

<?php
$username = $_POST['username'];
$password = $_POST['password'];
// unparameterized statement, can be escaped
$sql = "SELECT * FROM `users` WHERE `username` = '$username' AND `password` = '$password'";
// executed, and can directly interpolate user input
$result = mysqli_query($con, $sql);

to sanitize the input, we can use prepared statements. below is a more secure version of the problematic code:

<?php
$username = $_POST['username'];
$password = $_POST['password'];
// prepare the statement
$sql = "SELECT * FROM `users` WHERE `username` = ? AND `password` = ?";
$stmt = mysqli_prepare($con, $sql);
// bind parameters
mysqli_stmt_bind_param($stmt, 'ss', $username, $password);
// execute
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);
