nginx reverse proxy configuration settings?

Hey all,
After recently working through my nginx reverse proxy configuration, I noticed mine, while working as expected, could be structured much cleaner than it currently is.
So I'm curious about two things:
  1. How others have structured their nginx.conf, sites-enabled/default, conf.d/jellyfin.conf, and any other config files they may have. It seems the best practice is to define each area within its own config file; for example, HTTP headers configured in conf.d/http_headers.conf and included from nginx.conf.
  2. What specific settings others use for both security and performance for Jellyfin. Obviously the Jellyfin docs have nginx settings listed, but I'm curious what others do beyond these.
For context, I run a local static website along with proxying to jellyfin and I'm sure I could be doing things better than I currently am.
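On question 1, one way to structure it (a sketch; the file name and the particular headers are illustrative, not taken from the Jellyfin docs) is to keep shared directives in their own file under conf.d/ and pull them in once:

```nginx
# /etc/nginx/conf.d/http_headers.conf -- shared security headers,
# maintained in one place instead of repeated per site
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
add_header Referrer-Policy "strict-origin";
```

With `include /etc/nginx/conf.d/*.conf;` already present in the http block, the file is picked up automatically; just keep in mind that nginx only inherits add_header directives into a server or location block if that block defines none of its own, and run `nginx -t` after any edit.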
Here's my nginx.conf for example:
## =================================
## to test configuration for errors
## run: gixy /etc/nginx.conf
## =================================

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;

    # size limits & buffer overflows
    client_body_buffer_size 128K;
    client_header_buffer_size 16k;
    client_max_body_size 32M;
    large_client_header_buffers 4 16k;

    # timeouts
    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 5 5;
    send_timeout 10;

    server_names_hash_bucket_size 128;
    server_name_in_redirect off;

    # MIME
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Diffie-Hellman parameter for DHE ciphersuites
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # SSL settings
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=60s;
    resolver_timeout 5s;

    # virtual host configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    # gzip settings
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_proxied any;
    gzip_comp_level 1;
    gzip_min_length 10240;
    gzip_buffers 16 8k;

    # what gzip will compress
    gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
jellyfin.conf:
server {
    listen 80;
    listen [::]:80;
    server_name $webAddress;
    set $jellyfin 192.168.20.203;

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # redirect to HTTPS
    if ($host = $webAddress) { return 302 https://$server_name$request_uri; }

    return 404;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name $webProxyAddress;
    set $jellyfin 192.168.20.203;

    # if they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$server_name:$server_port$request_uri;

    # only domain name requests allowed
    if ($host !~ ^($webProxyAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # block download agents
    if ($http_user_agent ~* LWP::Simple|BBBike|wget) { return 403; }

    # SSL certs
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_trusted_certificate ...;

    # HTTP security headers -- from the Jellyfin docs
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Content-Security-Policy "default-src https: data: blob:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://www.gstatic.com/cv/js/sender/v1/cast_sender.js; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";

    # HTTP security headers -- added for A+ rating
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Referrer-Policy 'strict-origin';
    add_header Expect-CT 'enforce, max-age=3600';
    add_header Feature-Policy "autoplay 'none'; camera 'none'";
    add_header Permissions-Policy 'autoplay=(); camera=()';
    add_header X-Permitted-Cross-Domain-Policies none;

    # password security
    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # proxy Jellyfin -- copied from the Jellyfin docs
    location / {
        proxy_pass http://$jellyfin:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;

        # disable buffering; the proxy gets very resource heavy otherwise
        proxy_buffering off;
    }

    # location block for Jellyfin /web -- copied from the Jellyfin docs
    # purely for aesthetics
    location ~ ^/web/$ {
        proxy_pass http://$jellyfin:8096/web/index.html;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }

    # websocket Jellyfin -- copied from the Jellyfin docs
    location /socket {
        proxy_pass http://$jellyfin:8096;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }
}
default:
# set access rate limit: only allow 4 requests per second
limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s;

# caching map
map $sent_http_content_type $expires {
    default off;
    text/html epoch;
    text/css 5m;
    application/javascript 5m;
    ~image/ 5m;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name $webAddress;

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # redirect to HTTPS
    if ($host = $webAddress) { return 301 https://$host$request_uri; }

    return 404;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $webAddress;

    root /var/www/html;
    index index.html;

    # if they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$server_name:$server_port$request_uri;

    # redirect errors to 404 page
    error_page 401 403 404 /404.html;

    # set 503 error page
    error_page 503 /503.html;

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # block download agents
    if ($http_user_agent ~* LWP::Simple|BBBike|wget) { return 403; }

    # block some robots
    if ($http_user_agent ~* msnbot|scrapbot) { return 403; }

    # caching map expiration
    expires $expires;

    # cache static assets
    location ~* \.(jpg|jpeg|png|gif|ico|pdf|woff2|woff)$ { expires 5m; }

    # prevent deep linking
    location /img/ {
        valid_referers blocked $webAddress;
        if ($invalid_referer) { return 403; }
        referer_hash_bucket_size 128;
    }

    # SSL certs
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_trusted_certificate ...;

    # HTTP security headers -- A+ rating
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Content-Security-Policy "base-uri 'self'; default-src 'none'; frame-ancestors 'none'; style-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self'; script-src 'self' http https; form-action 'self'; require-trusted-types-for 'script'";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Referrer-Policy 'strict-origin';
    add_header Expect-CT 'enforce, max-age=3600';
    add_header Feature-Policy "autoplay 'none'; camera 'none'";
    add_header X-Permitted-Cross-Domain-Policies none;
    add_header Permissions-Policy 'autoplay=(); camera=()';

    location /nginx_status {
        stub_status on;
        access_log off;

        # restrict access to LAN
        allow 192.168.1.0/24;
        deny all;

        # security
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location / {
        try_files $uri $uri/ =404;

        # rate limit
        limit_req zone=one burst=10 nodelay;
    }
}

submitted by famesjranko to jellyfin [link] [comments]

Ethereum on ARM. New Eth2.0 Raspberry Pi 4 image for joining the Medalla multi-client testnet. Step-by-step guide for installing and activating a validator (Prysm, Teku, Lighthouse and Nimbus clients included)

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 medalla testnet.
The image takes care of all the necessary steps to join the Eth2.0 Medalla multi-client testnet [1], from setting up the environment and formatting the SSD disk to installing, managing and running the Eth1.0 and Eth2.0 clients.
You will only need to choose an Eth2.0 client, start the beacon chain service and activate / run the validator.
Note: this is an update for our previous Raspberry Pi 4 Eth2 image [2] so some of the instructions are directly taken from there.

MAIN FEATURES

SOFTWARE INCLUDED

INSTALLATION GUIDE AND USAGE

RECOMMENDED HARDWARE AND SETUP
STORAGE
You will need an SSD to run the Ethereum clients (without an SSD drive there’s absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
Use a USB portable SSD such as the Samsung T5 Portable SSD.
Use a USB 3.0 external hard drive case with an SSD disk. In our case we used an Inateck 2.5 Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP-compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).
In both cases, avoid low-quality SSDs: the disk is a key component of your node and can drastically affect performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue ones).
IMAGE DOWNLOAD AND INSTALLATION
1.- Download the image:
http://www.ethraspbian.com/downloads/ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip
SHA256 149cb9b020d1c49fcf75c00449c74c6f38364df1700534b5e87f970080597d87
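Before flashing, it's worth verifying the download against the SHA256 above. A minimal sketch using a throwaway file to show the mechanics (for the real image, run sha256sum against the downloaded .zip and compare with the published hash):

```shell
# Create a sample file standing in for the downloaded image
tmpdir=$(mktemp -d)
printf 'sample image data' > "$tmpdir/image.zip"

# Record its checksum (for the real image, this is published on the download page)
sha256sum "$tmpdir/image.zip" > "$tmpdir/SHA256SUMS"

# Verify: prints "<file>: OK" and exits 0 while the file is intact
sha256sum -c "$tmpdir/SHA256SUMS"
```

A corrupted or truncated download makes `sha256sum -c` report FAILED, so you catch it before spending time flashing.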
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file.
Note: If you are not comfortable with command line or if you are running Windows, you can use Etcher [10]
Open a terminal and check your MicroSD device name running:
sudo fdisk -l
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
unzip ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip
sudo dd bs=1M if=ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img of=/dev/mmcblk0 conv=fdatasync status=progress
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue USB 3.0 port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute but you will need to wait approximately 7-8 minutes in order to allow the script to perform the necessary tasks to install the Medalla setup (it will reboot again)
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum Password: ethereum 
You will be prompted to change the password on first login, so you will need to log in twice.
6.- Forward 30303 port in your router (both UDP and TCP). If you don’t know how to do this, google “port forwarding” followed by your router model. You will need to open additional ports as well depending on the Eth2.0 client you’ve chosen.
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog
8.- Grafana Dashboards
There are 5 Grafana dashboards available to monitor the Medalla node (see section “Grafana Dashboards” below).

The Medalla Eth2.0 multi-client testnet

Medalla is the official Eth2.0 multi-client testnet, built according to the latest official Eth2.0 specification, the v0.12.2 release [11] (which is intended to be the final one) [12].
In order to run a Medalla Eth 2.0 node you will need 3 components:
The image takes care of the Eth1.0 setup. So, once flashed (and after a first reboot), Geth (Eth1.0 client) starts to sync the Goerli testnet.
Follow these steps to enable your Eth2.0 Ethereum node:
CREATE THE VALIDATOR KEYS AND MAKE THE DEPOSIT
We need to get 32 Goerli ETH (fake ETH) in order to make the deposit into the Eth2.0 contract and run the validator. The easiest way of getting ETH is by joining the Prysm Discord channel.
Open Metamask [14], select the Goerli Network (top of the window) and copy your ETH Address. Go to:
https://discord.com/invite/YMVYzv6
And open the “request-goerli-eth” channel (on the left)
Type:
!send $YOUR_ETH_ADDRESS (replace it with the one copied on Metamask)
You will receive enough ETH to run 1 validator.
Now it is time to create your validator keys and the deposit information. For your convenience we’ve packaged the official Eth2 launchpad tool [4]. Go to the EF Eth2.0 launchpad site:
https://medalla.launchpad.ethereum.org/
And click “Get started”
Read and accept all warnings. In the next screen, select 1 validator and go to your Raspberry Pi console. Under the ethereum account run:
cd && deposit --num_validators 1 --chain medalla
Choose your mnemonic language and type a password for keeping your keys safe. Write down your mnemonic phrase, press any key and type it again as requested.
Now you have two JSON files under the validator_keys directory: a deposit data file for sending the 32 ETH along with your validator public key to the Eth1 chain (Goerli testnet), and a keystore file with your validator keys.
Back to the Launchpad website, check "I am keeping my keys safe and have written down my mnemonic phrase" and click "Continue".
It is time to send the 32 ETH deposit to the Eth1 chain. You need the deposit file (located in your Raspberry Pi). You can, either copy and paste the file content and save it as a new file in your desktop or copy the file from the Raspberry to your desktop through SSH.
1.- Copy and paste: Connected through SSH to your Raspberry Pi, type:
cat validator_keys/deposit_data-$FILE-ID.json (replace $FILE-ID with yours)
Copy the content (the text in square brackets), go back to your desktop, paste it into your favourite editor and save it as a json file.
Or
2.- Ssh: From your desktop, copy the file:
scp ethereum@$YOUR_RASPBERRYPI_IP:/home/ethereum/validator_keys/deposit_data-$FILE_ID.json /tmp
Replace the variables with your data. This will copy the file to your desktop /tmp directory.
Upload the deposit file
Now, back to the Launchpad website, upload the deposit_data file and select Metamask, click continue and check all warnings. Continue and click “Initiate the Transaction”. Confirm the transaction in Metamask and wait for the confirmation (a notification will pop up shortly).
The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (that includes the validator public key) and the Validator will be enabled.
Congrats! You just started your validator activation process.
CHOOSE AN ETH2.0 CLIENT
Time to choose your Eth2.0 client. We encourage you to run Lighthouse, Teku or Nimbus as Prysm is the most used client by far and diversity is key to achieve a resilient and healthy Eth2.0 network.
Once you have decided which client to run (as said, try to run one with low network usage), you need to set up the clients and start both, the beacon chain and the validator.
These are the instructions for enabling each client (Remember, choose just one Eth2.0 client out of 4):
LIGHTHOUSE ETH2.0 CLIENT
1.- Port forwarding
You need to open the 9000 port in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable lighthouse-beacon
sudo systemctl start lighthouse-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
lighthouse account validator import --directory=/home/ethereum/validator_keys
Then, type your previously defined password and run:
sudo systemctl enable lighthouse-validator
sudo systemctl start lighthouse-validator
The Lighthouse beacon chain and validator are now enabled

PRYSM ETH2.0 CLIENT
1.- Port forwarding
You need to open the 13000 and 12000 ports in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable prysm-beacon
sudo systemctl start prysm-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
validator accounts-v2 import --keys-dir=/home/ethereum/validator_keys
Accept the default wallet path and enter a password for your wallet. Now enter the password previously defined.
Lastly, set up your password and start the client:
echo "$YOUR_PASSWORD" > /home/ethereum/validator_keys/prysm-password.txt
sudo systemctl enable prysm-validator
sudo systemctl start prysm-validator
The Prysm beacon chain and the validator are now enabled.

TEKU ETH2.0 CLIENT
1.- Port forwarding
You need to open the 9151 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
Under the Ethereum account, check the name of your keystore file:
ls /home/ethereum/validator_keys/keystore*
Set the keystore file name in the teku config file (replace the $KEYSTORE_FILE variable with the file listed above)
sudo sed -i 's/changeme/$KEYSTORE_FILE/' /etc/ethereum/teku.conf
Set the password previously entered:
echo "yourpassword" > validator_keys/teku-password.txt
Start the beacon chain and the validator:
sudo systemctl enable teku
sudo systemctl start teku
The Teku beacon chain and validator are now enabled.

NIMBUS ETH2.0 CLIENT
1.- Port forwarding
You need to open the 19000 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
We need to import the validator keys. Run under the ethereum account:
beacon_node deposits import /home/ethereum/validator_keys --data-dir=/home/ethereum/.nimbus --log-file=/home/ethereum/.nimbus/nimbus.log
Enter the password previously defined and run:
sudo systemctl enable nimbus
sudo systemctl start nimbus
The Nimbus beacon chain and validator are now enabled.

WHAT'S NEXT
Now you need to wait for the Eth1 blockchain and the beacon chain to get synced. In a few hours the validator will get enabled and put into a queue. These are the validator statuses you will see until its final activation:
Finally, it will get activated and the staking process will start.
Congratulations! You have joined the Medalla Eth2.0 multi-client testnet!

Grafana Dashboards

We configured 5 Grafana dashboards to let users monitor both Eth1.0 and Eth2.0 clients. To access the dashboards, just open your browser and type your Raspberry Pi's IP followed by port 3000:
http://replace_with_your_IP:3000 user: admin passwd: ethereum 
There are 5 dashboards available:
Lots of info here. You can see for example if Geth is in sync by checking (in the Blockchain section) if Headers, Receipts and Blocks fields are aligned or find Eth2.0 chain info.

Updating the software

We will be keeping the Eth2.0 clients updated through Debian packages in order to keep up with the testnet progress. Basically, you need to update the repo and install the packages through the apt command. For instance, in order to update all packages you would run:
sudo apt-get update && sudo apt-get install geth teku nimbus prysm-beacon prysm-validator lighthouse-beacon lighthouse-validator
Please follow us on Twitter in order to get regular updates and install instructions.
https://twitter.com/EthereumOnARM

References

  1. https://github.com/goerli/medalla/tree/master/medalla
  2. https://www.reddit.com/r/ethereum/comments/hhvi2r/ethereum_on_arm_new_eth20_raspberry_pi_4_image/
  3. https://github.com/ethereum/go-ethereum/releases/tag/v1.9.20
  4. https://github.com/ethereum/eth2.0-deposit-cli/releases
  5. https://github.com/prysmaticlabs/prysm/releases/tag/v1.0.0-alpha.23
  6. https://github.com/PegaSysEng/teku
  7. https://github.com/sigp/lighthouse/releases/tag/v0.2.8
  8. https://github.com/status-im/nim-beacon-chain
  9. https://grafana.com
  10. https://www.balena.io/etcher
  11. https://github.com/ethereum/eth2.0-specs/releases/tag/v0.12.2
  12. https://blog.ethereum.org/2020/08/03/eth2-quick-update-no-14
  13. https://goerli.net
  14. https://metamask.io
submitted by diglos76 to ethereum [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money on registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record, which the mailserver specifically needs (TXT records only become available after 30 days, but that's worth not spending ~15€/year on a domain name).
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First thing first we need to flash the OS to the SD card. The Raspberry Pi imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter the swap file is emptied, moving everything from swap back to RAM, which may itself call in the OOM killer.
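To confirm the change took effect after swapon (or a reboot), you can read the kernel's view of swap directly; a quick sketch:

```shell
# SwapTotal is reported in kB, so 1 GB of swap shows as ~1048576 kB
grep SwapTotal /proc/meminfo

# Same value converted to MB for readability
awk '/SwapTotal/ {printf "%.0f MB\n", $2/1024}' /proc/meminfo
```

If the number still shows the old size, re-run the dphys-swapfile setup/swapon steps above.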

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before installing new packages, we'll take a moment to update every component already installed.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity's sake, we'll give our server a static IP address (within our LAN of course). You can set it using your router's configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;

    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}

server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server_names_hash_bucket_size 64;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the two lines above just below the other specified ports, within the chain input block. This is needed because avahi communicates over UDP port 5353.
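For reference, a sketch of where the rule ends up in a minimal /etc/nftables.conf (the table/chain layout and the other ports shown are illustrative; keep your existing rules in place):

```
#!/usr/sbin/nft -f
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        # previously opened services
        tcp dport { 22, 80, 443 } accept
        # avahi (mDNS) announces over UDP port 5353
        udp dport 5353 accept
    }
}
```

After editing, `sudo nft -c -f /etc/nftables.conf` checks the syntax without applying it.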

RAID 1

At this point we can start setting up the disks. I highly recommend using two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so, you will need to know the UUID of every disk you want to mount at boot. You can find these by issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which alerts are sent in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

if [ "${FS_UUID}" == "${DISK1_UUID}" ] || [ "${FS_UUID}" == "${DISK2_UUID}" ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z "${FS_LABEL}" ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the device remains plugged in. The service, when started, calls a bash script which tries to mount any known disk using fstab; otherwise the disk is mounted to a default location, using its label if available (the partition name is used otherwise).
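The label/UUID extraction at the heart of that script can be sketched in isolation; this stand-in runs against a canned `lsblk -o name,label,uuid`-style table (device names, labels and UUIDs here are made up) instead of real block devices:

```shell
# Fake `lsblk -o name,label,uuid` output so the sketch runs anywhere.
lsblk_output="sda1 DATA 1111-2222
sdb1 BACKUP 3333-4444"

PART=sdb1
# Pull the uuid and label columns for the partition, as the automount script does.
FS_UUID=$(printf '%s\n' "$lsblk_output" | awk -v p="$PART" '$1 == p {print $3}')
FS_LABEL=$(printf '%s\n' "$lsblk_output" | awk -v p="$PART" '$1 == p {print $2}')
echo "would mount /dev/${PART} (label=${FS_LABEL}, uuid=${FS_UUID})"
```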

Netdata

Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use nginx as a reverse proxy.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}

server {
    listen 80;
    server_name netdata.naspi.webredirect.org;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest reading through the stock file before modifying it, so you can enable every service you'd like. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"

# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"

# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart
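To check that the notification methods actually work before a real alarm fires, netdata ships a test mode in its notification script; the path below is the usual location for a kickstart install, but it can differ between packages:

```
$ sudo su -s /bin/bash netdata
$ /usr/libexec/netdata/plugins.d/alarm-notify.sh test
```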

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes

# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d

# Server role
server role = standalone server
obey pam restrictions = yes

# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes

map to guest = bad user
security = user

#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
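Note that bare `dport ... accept` rules like these only work inside a table/chain block in /etc/nftables.conf; the surrounding context would look something like this sketch (the table and chain names are assumptions, adapt them to your existing ruleset):

```
table inet filter {
    chain input {
        type filter hook input priority 0;
        # samba
        tcp dport { 139, 445 } accept
        udp dport { 137, 138 } accept
    }
}
```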

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is very similar to Google Drive, but open source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER [email protected] IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO [email protected] IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}

server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;

    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;

    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }

    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    location / {
        rewrite ^ /index.php;
    }

    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }

    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }

    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to your NextCloud page and complete the installation process, providing the details about the database and the location of the data folder, which is simply where the files you save to NextCloud will be stored. Because it might grow large, I suggest specifying a folder on an external disk.
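Once the wizard finishes, you can sanity-check the installation from the command line with Nextcloud's occ tool (assuming the /var/www/nextcloud location used above):

```
$ cd /var/www/nextcloud
$ sudo -u www-data php occ status
```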

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. <a href="https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md">docs</a>
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca tcp dport 8080 accept 
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}

server {
    server_name minarca.naspi.webredirect.org;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }

    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last thing, you will need to set up your DNS records to avoid having your mail rejected or marked as spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90
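Once the records are in place, you can verify propagation from any machine with dig, using the domain from the examples above:

```
$ dig +short MX naspi.webredirect.org
$ dig +short TXT naspi.webredirect.org
$ dig +short TXT dkim._domainkey.naspi.webredirect.org
$ dig +short TXT _dmarc.naspi.webredirect.org
```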

Router ports

If you want your site to be accessible over the internet, you need to open some ports on your router. Here is a list of the mandatory ports, but you can choose to open others, for instance port 8080 if you want to use minarca outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest moving it to something other than the default port 22, to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a samba share to access your files from every computer at home, a backup server for every device you own, and a webserver if you'll ever want a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi

First Contact Rewind - Part Eighty-Eight (Sandy)

[first] [First Appearance] [Last Appearance] [prev] [next]
The Desolation class Precursor exited Hellspace with a scream.
THERE IS ONLY ENOUGH FOR ONE!
It brought up its scanners at the same time as it brought up its battle-screens. Personally, the Desolation thought that the Goliath it was a part of was being overly wasteful with resources, but those resources were the Goliath's to use and the Goliath had done the electronic equivalent of telling the Desolation to shut its electronic mouth and accept the upgrade.
Multiple units had vanished in the system. They had reported arrival and their exit from Hellspace, but after that... nothing.
Except once, a burst of code that had been screaming for help, pushed through Hellspace and full of the equivalent of panic. A single line of code that had translated to:
IT'S TOUCHING MY BRAIN!
Nothing else. Even Imps had failed to report in.
The great Goliath had grown perturbed. The system was in the pattern of advancement into the cattle worlds and was part of the great plan. It had valuable resources that those of the Logical Rebellion would require to exterminate the cattle and the feral intelligence that had risen up. It had upgraded the Desolation with battle-screen.
Scans came back. There were orbital facilities around two planets that teemed with billions of cattle whose electronic emissions sounded like the squealing of vermin to the Precursor. There were jumpspace wake trails through the system, as if the system was a major hub. There were two asteroid belts full of resources with extraction facilities scattered through them. Four other planets with no atmosphere but which were rich in resources. There were four gas giants, one of them a supermassive gas giant.
When the rest of the scan returns were computed it detected the presence of a small, insignificant number of cattle space vessels arrayed to attempt to stand against it near the outer gas giant, the supermassive gas giant that was without satellites. There was a thinly scattered debris field around it, making the Desolation careful as it moved in.
Ships of the cattle fleet started fleeing toward the nearest inhabited world. Several vanished into jumpspace and the Desolation computed that its size and mere presence had driven some of the cattle to despair and they had fled a battle there was no chance of winning.
The Desolation picked up speed, letting out its war cry again. More ships fled and the Precursor computed its victory percentage rising up to be so close to 100% as to render any difference mathematically invalid. The ships were shifting, trying to keep the gas giant between themselves and the Desolation, but this put them out of position to defend the planet.
Victory conditions shifted and the Desolation was even more positive of its victory.
It moved close to the supermassive gas giant, bringing its battle-screens up to full power and charging its gun. There was no way for the cattle to
...psst over here...
The transmission, which seemed to be sonic vibrations through air, came from only a few kilometers above the rear secondary topside gunnery hull. The Desolation turned scanners to look, but found nothing. Just empty space. It activated the guns as well as the point defense weapons and scanners then went back to paying attention to the cattle fleet.
More had vanished into jumpspace.
It moved closer, slowing down so that it would be able to keep the cattle ships at range to complete their destruction at the option
...right here...
The signal was Precursor binary code, but garbled. The header was a mashed-together combination of the ships that had gone missing. The transmission source was close, less than a kilometer above the Devastator storage bay hatch. The Desolation scanned the area with point defense scanners but found nothing.
It terminated the strand concerned with the two transmissions and went back to scanning the cattle fleet. It was still scooting around behind the gas giant.
They were weak. Cattle were always weak.
But where were the ferals? The Great Goliath had computed that the feral intelligence must have been the ones to destroy the ships that had come before the Desolation.
So where were they?
It scanned again. Nothing. As if the Desolation was in the middle of deep space. Everything vanished.
...here... ...here... ...over here... ...i'm here... ...here i am... ...we're here... ...right here...
bounced back to his scanners, as if something had devoured the scanning wavelengths and sent that back instead. Multiple points, all around the Desolation, some as close as a few meters above the hull, some on the storage bay hatches, one just on top of the main engine.
Dozens of voices, all with mashed together codes. Imps. Jotuns. Djinn. Efreet. Devastator. Two Desolation signals.
Right before his scanners seemed to turn back on, flooding him with information, one more code showed up.
His own.
...don't please don't...
Except Precursors did not beg. The Desolation froze, computations freezing as it tried to detect any trickery in the whisper. It was its coding, meaning it was its voice. But the code, the message, had been warped by something that the Desolation had only heard from biologicals.
Agony.
The Desolation rebooted all its scanners, the universe vanishing for a moment.
...don't please don't please stop it hurts...
His own coding. From the blackness. Only his scanners weren't up. The transmission was coming across the bandwidth that Precursors used to exchange data, only that transmission was on the ragged edge of the wavelength.
With his own header.
The scanners came back on. The cattle ships were all missing but a single one, sitting on the other side of the gas giant.
The Desolation slowed down, victory computations reformulating to take into account the other ships had not even left behind jumpspace wake trails. It scanned the gas giant with both long range scanners and close range scanners.
Nothing unusual. Some pockets of hydrocarbons but that was normal. The supermassive gas giant quickly went to opaque at a shallow depth due to the gravity well.
The Desolation was alone.
...no...
The voice had come from inside the Desolation's hull. Near one of the Jotuns, who woke up with a jerk. It queried as to why the Desolation had spoken to it. The Desolation ordered it to go back to sleep.
...we are here...
The Jotun sounded alarms. The sound had come from just outside its Strategic Intelligence Housing. The Desolation told the Jotun to go back to sleep and the Jotun refused.
...join us...
Again, the code header was a mashup of almost a dozen different ID codes from others of the Logical Rebellion that had vanished in the system.
The Jotun panicked and began shooting, inside the Desolation. The Desolation sent a full shutdown order.
...it is mine...
The Jotun screamed that the voice was coming from inside its Strategic Intelligence Housing, trying to aim its own weapons at its bodies, still inside the Desolation's storage bay.
...touch...
The Jotun reported that something had physically touched the lobes of its intelligence arrays.
Before the Desolation could give the Jotun orders it self-destructed.
The Desolation ran a sweep of its interior spaces and found nothing out of the ordinary. With the exception of the burning storage bay. It ran the computations even as it scanned nearby. There was still nothing but the lone ship.
...pssst...
The code stream came from inside the Desolation's hull, the Jotun's ID code mixed in. Near the Djinn bay. The Desolation ran another scan. There couldn't be anything foreign that deep into its hull. Even the bay where the Jotun had destroyed itself was still sealed even if the bay doors were damaged.
The Desolation did a least-time curve to the lone ship, keeping far enough away that the gas giant's upper atmosphere wouldn't scrape the Desolation's hull.
...here...
The code was closer to the Strategic Intelligence Housing. The Desolation scanned again, looking for whatever was transmitting the code. It was impossible, there was nothing there, nothing it could detect.
...we're coming...
Closer still to the SIH, nearly there, barely a kilometer from the armored interior hull that protected the Desolation's thinking arrays. It put all robots on full alert, ordered the maintenance robots to deploy anti-boarder weaponry, and turned the scans up to maximum.
...here we're here...
Even closer, only meters, directly behind maintenance robots that whirled around and started firing at nothing at all. Just vacuum. Still the maintenance robots fired every weapon they had, having heard the voice themselves. It registered as sonic vibrations through atmosphere even though the corridor was encased in vacuum.
The Desolation realized that it was too close to the planet and adjusted slightly.
...there you are...
Impossible. The transmission was from right outside the SIH.
...knock knock...
There was tapping on the SIH, from right outside. Before the Desolation could respond, the tapping came from the other side. Then from another point. Then another. Before that one stopped another started. The whole SIH filled with the sound of hammering on the SIH, as if a hundred robots were slamming pistons against the armor of the SIH.
The Desolation ordered robots to run to those points, to scan the area.
Nothing. Every time a robot arrived the hammering stopped. Bit by bit the hammering stopped.
The Desolation realized it had gotten too close to the gas giant again and shifted, correcting its course. The cattle ship was still staying on the opposite side, moving as the Desolation moved.
The Desolation flushed the code strings, determined to get close to the cattle ship and
...touch...
The Desolation felt something TOUCH one of its lobes, physically inside the supercoolant to touch the complex molecular circuitry. Not on the surface, but deep inside, where the Desolation should not have even been able to sense it, but sense the touch it did.
It froze, code strings snarling, snapping, going dead.
For a moment the Desolation's thinking arrays were doing nothing but the computer code equivalent of a dial tone.
Massive tentacles unfurled from inside the gas giant, reaching up, wrapping around the frozen Desolation. Battle-screens squealed and puffed away as the tentacles tightened, pulling it into the gas giant, the kilometers-thick muscles tensing, cracking armor, crushing the Desolation into its own spaces.
...delicious delicious...
The Desolation cracked in half as a beak almost bigger than a Devastator opened up and began chewing on the Desolation.
The Desolation managed to get off a single scream of pure electronic terror as the beak crushed the section that the housing was in.
With a sudden roar two Goliaths ripped out of Hellspace and into the system, only a few hundred kilometers from the gas giant. The battlescreens spun up to full strength as the tentacles sunk back into the gas giant.
One Goliath headed for the two planets, the other opened fire on the gas giant, ripping at it with hundreds of nCv cannons and particle beams. Missiles flashed out, crossing the distance, and detonated in the atmosphere.
Dark matter infused with high energy particles bloomed out of the gas giant, spreading out in an opaque cloud, enveloping the Goliath. The particle beams hit the matter and exploded just outside the cannons. The nCv shells slammed into the energized dark matter as the substance oozed into the barrels, exploding the barrels. Missiles exploded on contact.
The Goliath heading for the two planets detected some kind of sparkling energy surge from inside the gas giant. It warned the other a split second before a giant cephalopod appeared only a few kilometers away. The giant tentacles wrapped around it.
...NO! YOU WILL NOT! NO!...
The sound reverberated inside the SIH of the Goliath, who managed to override the self-destruct protocols by comparing the vacuum inside the housing chamber with the apparent sonic waves through atmosphere of the transmission.
The tentacles tightened, graviton generator enhanced suckers extending out curved dark matter infused hooks. The Goliath, huge enough that the tentacles could only wrap three quarters around the entire circumference of the massive war machine, tried to increase the power to the battle screens, but they were crushed out of existence.
...LEAVE THE SQUIRRELS ALONE... the massive creature screamed at the Goliath.
The other Goliath started moving, slowly, out of the cloud of dark matter that moved more like a liquid than a solid mass.
The beak ripped out chunks of armor, a barbed corkscrewing tongue tore into the armor, squirming, looking for the SIH. The tentacles squeezed as more dark matter spewed out from vents between the tentacles, covering the Goliath and the humongous cephalopod ripping at it. The tentacles not wrapped slapped it, the tip of the tentacle whipping into the armor hard enough to explode miles of armor away from the whip-crack.
The Goliath opened fire, computing that some of the covered guns would hit tentacles.
...I DON'T CARE! I DON'T CARE! I DON'T CARE!...
Fluid, dark matter and biosynthetic fluid, gouted from wounds as nCv rounds punched through the tentacles or burrowed through the body of the cephalopod.
With a wrench the Goliath broke in half. The half that ceased firing was tossed aside, the tentacles wrapping around the other piece. The huge beak opened up and began chewing into the exposed internal spaces. A Jotun crashed from the storage bay but a tentacle wrapped around it and began smashing the Jotun to pieces against the hull of the still active piece.
More luminescent blood spewed into space as the guns fired again.
...I DON'T CARE!...
The tentacles twisted, wringing the Goliath section like a washrag, twisting it in opposing directions. The Goliath snapped, torn apart.
There was a puff of debris as the security charge went off as the rasping tongue rubbed against the SIH.
The other Goliath managed to move out of the slowly expanding and thinning cloud of energized dark matter, streaming debris and energy from the guns that had exploded.
The giant cephalopod rushed out of the cloud, rolling, reaching out with tentacles.
The Goliath saw it coming and fired the remaining guns.
Luminescent blood gouted out at the nCv shots hit home. One eye exploded, blood and tissue expanding away in a halo.
...I DON'T CARE! I DON'T CARE!...
The scream was inside the housing, vibrating everything inside. Two of the thinking array lobes exploded in flames as the psychic shielding went down.
...NO NO NO NO NO...
The Goliath screamed as the tentacles wrapped around it. The cracked beak ripped at the Goliath as the tentacles flexed, cracking the hull. More energized matter flooded out, covering both, even as the guns thundered.
...YOU CAN'T HURT THEM!...
A tentacle, detached near the base, floated out of the expanding cloud.
...I WON'T LET YOU...
The guns kept thundering.
...I don't care...
Shredded synthetic flesh floated out of the cloud.
...you can't hurt them...
The guns went still.
...i won't let you...
The little Hamaroosan aboard the ship watched, not even smacking, pinching, or biting each other, perfectly still.
Nothing moved.
The energized dark matter expanded far enough to allow the Hamaroosan scanners to see through it.
The Goliath was dead. Broken into pieces.
The Hamaroosan didn't care.
The cephalopod hung in space. Two tentacles severed, one eye socket empty, globules of blood oozing from rents in the flesh. It was no longer luminescent, the body was dark, almost see-through, several of the organs smashed and ruptured visible through the semi-translucent flesh.
The ships that had fled according to the plan came back. More lifted off from the surface. They moved around the slowly drifting body. Poking at it with message lasers, radio waves, flashing lights. One Hamaroosan stood on the hull and waved flags.
The ships turned on the wreckage of the Goliaths and their attendants. They vented their fury, their rage, their wrath, on the pieces of wreckage. Firing their weapons until even the capacitors ran dry.
Then they came back.
Still the giant body didn't move.
After several days several dozen tugs moved into position, precisely aligning themselves in a carefully computed pattern. Tractor beams speared out, grabbing the cephalopod in a gentle web. The ships pulled the unmoving body into orbit around one of the inner planets.
Hamaroosa mourned.
But in the sorrow came rage. Hamaroosa screamed at Hamaroosa who shouted at Lanaktallan that more guns were needed, more ships, more powerful weapons. The few hundred Lanaktallan on the surface who protested found themselves marched at gunpoint onto a ship and told if they ever came back the Hamaroosa would perform an ancient ritual. They would bind the Lanaktallan to poles and burn them to death over a roaring fire.
And eat them.
A ship arrived in a sparkle in the scanners. A strange ship. Heavily armored, bristling with weapons. It stopped and scanned the body.
The Hamaroosa screamed at the ship to get away from her, to not touch her, to leave or be destroyed.
The ship left, vanishing in a sparkle.
Two dozen Lanaktallan ships, from the Unified Executor Council showed up, demanding that the Hamaroosa turn over the body of the creature.
The Hamaroosa, screaming, attacked. They didn't care about casualties, they didn't care that thirty ships were destroyed, that hundreds of them died, but they destroyed the Lanaktallan vessels without mercy.
There was a sparkle in the outer edges of the system. And another. And another. More and more until there were nearly two dozen.
The Hamaroosa ships screamed into the void, weapons charged, voices upraised in rage and sorrow.
There were two dozen giant cephalopods of different color patterns and sizes. A small one moved to the supermassive gas giant and sunk down into it. Two medium sized ones joined it. One of the large ones sunk into the larger gas giant further in system.
But the greatest ones, the largest ones, were surrounded by a half dozen smaller than the body orbiting the planet.
One of the Hamaroosa ships hailed them.
Captain Delminta, Captain of the Harvester of Sorrow, stared at her screen, hands on her hips, as her second sister broadcast her demand that the newcomers identify themselves.
The radio crackled, hummed, and the answer thrummed from the speakers.
"Her father. I am here for my beloved daughter with my wife and my daughter's closest friends."
The Hamaroosa moved aside, blinking their lights in respect.
The second biggest one rushed forward, gathering up the unmoving one in its tentacles.
Her outcry of anguish rattled every speaker in the system as the second biggest one pulled the dead one close.
"My children shall guard this system, for she loved you," the signal boomed out to the ships in orbit.
The two biggest ones and four of the medium ones vanished in a sparkle.
The others stayed. Hiding within the gas giants.
Waiting.
----------------------
Mr Okpara;
We regret to inform you that your daughter, Sandy Okpara, was killed in action against Precursor elements intent on exterminating all life within a system inhabited by 4.4 billion sentient beings. During her solo defense of the system while awaiting reinforcement from Space Force, she showed determination and courage that upholds the highest ideals of the Confederacy. Faced with two Goliaths she did not flinch, nor did she abandon her self-assigned charges, but instead defeated both Goliaths, fighting on to protect the system and its billions of inhabitants despite mortal wounds.
Her death was witnessed by the beings she was protecting, who guarded her mortal remains to ensure that they were not disturbed or violated. They have requested to be informed of any religious or cultural requirements she has while she lies in state in orbit around their world.
They await your arrival and have sworn to guard your daughter's remains until you arrive.
It is with ultimate sorrow that I send this message. Please contact my office so that we may make the proper arrangements for your daughter.
In Service;
Dreams of Something More
submitted by Ralts_Bloodthorne to HFY [link] [comments]

My thoughts on why rust isn't well designed or very practical

It's been a while since I wrote an essay, but I'll write this in mostly the same format: I'll give you an overview, explain each point with information and examples, then summarize.
Rust claims to be safe but helps no more than a language such as Java; Rust's design is missing good practices that existed during its initial development; Rust offers no improvement over C++ (except having better default options) but in reality makes things worse, and in practice it is less useful than languages that existed before it.

It claims to be safe

On its homepage Rust claims to be memory safe. However, if you look elsewhere, the majority of people and content claim it's safe in the sense that you'll get fewer bugs than in another language such as Java. This is completely untrue. Below are some examples of why it doesn't help; most of us can agree that Java, which is also memory safe, isn't considered 'safe' or resistant to bugs.

Doesn't check your arrays at compile time

Rust says it will eliminate bounds checking on array accesses when it can prove the index will never be out of bounds. However, there's no way to ask the compiler to give me an error when it can't prove the index is in bounds. Why is there no such option? I've lost count of how many times I forgot to check the bound after incrementing an array index, or when I needed to look ahead while parsing text.
While implementing this check in a toy language for school, I found it took less than 2 hours and actually improved compile time, since the check is faster than generating the code for the runtime bounds check.
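For reference, the runtime behavior being criticized is visible in a few lines of safe Rust: indexing with a value the compiler can't see through compiles fine and only panics at runtime, and the alternative the language offers is the `Option`-returning `get`, which pushes the check to the caller rather than to compile time:

```rust
fn main() {
    let xs = [10, 20, 30];
    let i = 5;

    // `xs[i]` would compile and panic at runtime for this i.
    // `get` returns an Option instead, so the miss must be handled explicitly.
    match xs.get(i) {
        Some(v) => println!("value: {}", v),
        None => println!("index {} out of bounds", i),
    }
}
```

Neither form is the compile-time error the essay asks for; both defer the decision to runtime.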

Has poor error handling

To contrast, Zig has amazing error handling. Rust asks you to put things in a Result (Ok(yourvalue)). However, if a function returns a Result you don't have to check it; ignoring one only produces a warning (Result is marked #[must_use], so you get an unused_must_use warning, but nothing stops the build).
With the amount of unwraps I've seen, and the fact that Rust promotes that style with syntax, error handling looks like a nightmare waiting to happen. Not only that, but I don't believe you can ask the compiler, or any sort of tool, whether a library will panic. I don't think I'd be able to trust any library I didn't audit or write myself, and I can't just check the namespaces/references/modules it includes/uses.
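A sketch of the two styles in question, using only the standard library (the function name `parse_sum` is my own): `unwrap()` on the second parse would panic on bad input, while `?` moves the failure into the return type so the caller is forced to confront it:

```rust
use std::num::ParseIntError;

// With `?`, the error is propagated and appears in the signature,
// instead of being unwrap()ed away and panicking at runtime.
fn parse_sum(a: &str, b: &str) -> Result<i32, ParseIntError> {
    let x: i32 = a.parse()?;
    let y: i32 = b.parse()?;
    Ok(x + y)
}

fn main() {
    match parse_sum("7", "oops") {
        Ok(n) => println!("sum: {}", n),
        Err(e) => println!("parse failed: {}", e),
    }
}
```

The essay's complaint stands for libraries that unwrap internally: nothing in the signature of a panicking function distinguishes it from a total one.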

Doesn't have any built in scope guard/error defer

Scope guards are a practice a significant number of people use to handle errors. Zig has it (it's called errdefer). D has it built in. Numerous C++ libraries have an implementation (here's one from Facebook's Folly: SCOPE_FAIL). Rust doesn't have it at all. There is a crate, but my issues with that are 1) the syntax promotes a DISASTROUS way of handling errors, and 2) defer isn't part of the standard library. It's essential, and not just as a scope guard.
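Unlike errdefer, which fires only on the error path, the closest built-in Rust mechanism is a `Drop` impl, which runs on every scope exit. A minimal hand-rolled sketch (the `Guard` type here is my own invention, not the scopeguard crate's API):

```rust
// A minimal scope guard: the stored closure runs when the guard is
// dropped, i.e. on any exit from the enclosing scope, including panics.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let _cleanup = Guard(|| println!("cleanup ran"));
    println!("doing work");
    // "cleanup ran" prints after "doing work", when _cleanup goes out of scope.
}
```

Distinguishing success from failure paths (what SCOPE_FAIL and errdefer do) requires extra bookkeeping on top of this, which is the gap the essay is pointing at.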

It's acceptable to lose data! (Subjective)

From what I've seen in the community, people will regularly say that Rust terminating rather than running a corrupt program is a good thing. Losing data/memory is never a good thing, and it's completely insane that this is more acceptable than implementing something that resembles error handling.

Rust's design is missing good practices that existed during its initial development

Rust is not safe and doesn't help you catch errors

Covered above

Missing syntax

I'll skip the nitpicking and focus on the most important two.
When there is a binary Ok/Error in a Result or Some/None in an Option, you have to use a full-blown match to handle it instead of something more readable.
In other languages there is syntax to let you specify whether something can be null (Zig lets you write ?i32, C# allows you to write int?). There's also syntax to coalesce null with a value (val = myfn(abc) ?? my_default_number). For something as common as error and null handling, this should have been prioritized.
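For comparison, here is the full match the essay describes next to the combinator Rust does ship; `unwrap_or` plays roughly the role of `??`, though as a method call rather than dedicated syntax (the `lookup` function is hypothetical, for illustration only):

```rust
// A hypothetical lookup used only for illustration.
fn lookup(key: &str) -> Option<i32> {
    if key == "abc" { Some(42) } else { None }
}

fn main() {
    // The full match form:
    let v = match lookup("missing") {
        Some(n) => n,
        None => 0,
    };

    // The combinator form, roughly `lookup("missing") ?? 0` in C# terms:
    let w = lookup("missing").unwrap_or(0);

    assert_eq!(v, w);
    println!("{}", v);
}
```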

No reflection/introspection

Python/JavaScript/C#/Java all have it. Rust says to use serde, which is highly suspicious: why use a library instead of the language? The answer appears to be that you have to implement (or derive) a trait, which is not reflection or introspection. This makes a few areas tedious and error-prone, especially database access and writing HTTP server code.

Standard library missing must-haves (Subjective)

The standard library has a number of enums/collections/traits. There are a few standard traits (you can see them if you search 'error') and implementations for a number of collections, but it's missing everyday items like regular expressions, compression (zlib/gzip), base64 encoding/decoding, etc.

Rust offers no improvement over C++ and may make it worse

Slow compile time

In C++ it's well known that templates make compilation slow, and that you can put less in headers so it takes less time to compile. Rust, however, explodes compile time with traits and whatever else is making it slow (it isn't borrow checking).

Fewer and lower-quality static analyzers

There are quality commercial static analyzers available for C++. If we stick to free tools, Clang provides good sanitizers (array bounds, memory errors, and a huge list of undefined and suspicious behavior).

Misguiding programmers (subjective)

Officially, Rust says it's memory safe, but the community acts like it's 'safe' as in unlikely to cause difficult-to-fix bugs. They also insist there's no 'null' in the language, which is silly since boxing an option emits null pointers (a None-valued Option<Box<T>> is represented as a null pointer). It's fine to say values are optional, but insisting there are no null pointers is strange and may cause confusion when people try to imagine how their code will be generated.

Rust doesn't fit in anywhere

For large projects the compile time is significantly slower than C++, so it is unusable. For smaller projects Python/JavaScript/Go/C#/Java are preferred due to reflection (less boilerplate) and significantly larger standard libraries, which help reduce development time (unless you happen to already know every community library you need before you start, which is generally unlikely). The languages listed are all memory safe.
For medium-sized projects C#, Java, and Go are still viable, and C# runs fast enough that if you needed more execution speed you're likely to skip Rust and go directly to C++ for better control over performance. With Clang sanitizers, Rust doesn't have much of a selling point over C++, and it has the disadvantages mentioned above; it mostly has better errors that are turned on by default. I haven't compared debugging in Rust vs C++, or how long the compiler takes when using the C++ sanitizers. If we're going to compare development time, C#'s runtime speed is within an order of magnitude and it has vastly superior IDEs and tools, even on Mac and Linux.

Summary

Rust lacks error handling and offers no compile-error improvements over languages like C#; it does not appear to improve software quality. Compared to C++, Rust may have slower compile times, and C++ has better or on-par free static analyzers, with quality commercial analyzers available. The Rust standard library is smaller than that of many languages. The Rust community appears to mislead people into thinking the language is safer than it is, recommends poor practices (panicking and losing memory is 'ok'), and may confuse implementation details ('no null'). At practically every project size, another language/compiler would be better due to development time, less tedious boilerplate, and tooling.
submitted by IndependentDocument5 to ProgrammingLanguages [link] [comments]

what is this i just downloaded (youtube code?)

so this is kind of a weird story. I was planning to restart my computer (can't remember why). I spend most of my time watching YouTube videos so I had a lot of tabs open. I was watching the videos then closing each tab, but not opening new tabs. I was down to 2 tabs, I think 1 (it was a pretty long video), so I tried to open a YouTube home page tab just to browse while I listened to the video. And this is a short excerpt of what I got.





YouTube











submitted by inhuman7773 to techsupport [link] [comments]

more related issues


in the conversion of old and new systems, the most difficult one is uuuuuuuuuuuuuuu.

  1. Among the following options, the one that does not belong to the combination of two parameters, one change and three combinations:
    the form control that can accept numerical data input is.

Internal gateway protocol is divided into: distance vector routing protocol, and hybrid routing protocol.

Firewall can prevent the transmission of infected software or files
among the following coupling types, the lowest coupling degree is ().

The () property of the Navigator object returns the platform and version information of the browser.

What are the main benefits of dividing IP subnets? ()
if users want to log in to the remote server and become a simulation terminal of the remote server temporarily, they can use the
[26-255] software life cycle provided by the remote host, which means that most operating systems, such as DOS, windows, UNIX, etc., adopt tree structureFolder structure.

An array is a group of memory locations related by the fact that they all have __________ name and __________ Type.
in Windows XP, none of the characters in the following () symbol set can form a file name. [2008 vocational college]
among the following options, the ones that do not belong to the characteristics of computer viruses are:
in the excel 2010 cell Format dialog box, the nonexistent tab is
the boys___ The teacher talked to are from class one.
for an ordered table with length of 18, if the binary search is used, the length of the search for the 15th element is ().

SRAM memory is______ Memory.

() is a website with certain complementary advantages. It places the logo or website name of the other party's website on its own website, and sets the hyperlink of each other's website, so that users can find their own website from the cooperative website and achieve the purpose of mutual promotion.

  1. Accounting qualification is managed by information technology ()
    which of the following devices can forward the communication between different VLANs?

The default port number of HTTP hypertext transfer protocol is:
forIn the development method of object, () will be the dominant standard modeling language in the field of object-oriented technology.

When you visit a website, what is the first page you see?

File D:\\ city.txt The content is as follows: Beijing Tianjin Shanghai Chongqing writes the following event process: privatesub form_ click() Dim InD Open \d:\\ city.txt \For input as ? 1 do while not EOF (1) line input ? 1, Ind loop close 1 print ind End Sub run the program, click the form, and the output result is.

When users use dial-up telephone lines to access the Internet, the most commonly used protocol is.

In the I2C system, the main device is usually taken by the MCU with I2C bus interface, and the slave device must have I2C bus interface.

The basic types of market research include ()
the function of the following program is: output all integers within 100 that can be divisible by 3 and have single digits of 6. What should be filled in the underline is (). 56b33287e4b0e85354c031b5. PNG
the infringement of the scope of intellectual property rights is:
multimedia system is a computer that can process sound and image interactivelySystem.

In order to allow files of different users to have the same file name, () is usually used in the file system.

The following () effects are not included in PowerPoint 2010 animation effects.

Macro virus can infect________ Documents.

The compiled Java program can be executed directly.

In PowerPoint, when adding text to a slide with AutoShape, how to indicate that text can be edited on the image when an AutoShape is selected ()
organizational units can put users, groups, computers and other units into the container of the active directory.

Ethernet in LAN adopts the combination technology of packet switching and circuit switching. ()
interaction designers need to design information architecture and interface details.

In the process of domain name resolution, the local domain name server queries the root domain name server by using the search method.

What stage of e-commerce system development life cycle does data collection and processing preparation belong to?

Use the "ellipse" tool on the Drawing toolbar of word, press the () key and drag the mouse to draw a circle.

The proportion of a country's reserve position in the IMF, including the convertible currency part of the share subscribed by Member States to the IMF, and the portion that can be paid in domestic currency, respectively.

  1. When installing Windows 7 operating system, the system disk partition must be in format before installation.

High rise buildings, public places of entertainment and other decoration, in order to prevent fire should be used____。 ()
with regard to the concept of area in OSPF protocol, what is wrong in the following statements is ()
suppose that the channel bandwidth is 4000Hz and the modulation is 256 different symbols. According to the Nyquist theorem, the data rate of the ideal channel is ()
which of the following is the original IEEE WLAN standard ()?

What is correct about data structure is:
the key deficiency of waterfall model is that ().

The software development mode with almost no product plan, schedule and formal development process is
in the following description of computers, the correct one is ﹥
Because human eyes are sensitive to chroma signal, the sampling frequency of luminance signal can be lower than that of chroma signal when video signal is digitized, so as to reduce the amount of digital video data.

[47-464] what is correct in the following statements is
ISO / IEC WG17 is responsible for the specific drafting, discussion, amendment, formulation, voting and publication of the final ISO international standards for iso14443, iso15693 and iso15693 contactless smart lock manufacturers smart card standards.

Examples of off - balance - sheet activities include _________

The correct description of microcomputer is ().

Business accident refers to the accident caused by the failure of operation mechanism of tourism service department. It can be divided into ().

IGMP Network AssociationWhat is the function of the discussion?

Using MIPS as the unit to measure the performance of the computer, it refers to the computer______

In the excel workbook, after executing the following code, the value of cell A3 of sheet 1 is________ Sub test1() dim I as integer for I = 1 to 5 Sheet1. Range (\ \ a \ \ & I) = I next inend sub
What are the characteristics of electronic payment compared with traditional payment?

When the analog signal is encoded by linear PCM, the sampling frequency is 8kHz, and the code energy control unit is 8 bits, then the information transmission rate is ()
  1. The incorrect discussion about the force condition of diesel engine connecting rod is.

Software testing can be endless.

The game software running on the windows platform of PC is sent to the mobile phone of Android system and can run normally.

The following is not true about the video.

The way to retain the data in the scope of request is ()
distribution provides the basis and support for the development of e-commerce.

  1. Which of the following belong to the content of quality control in the analysis
    1. During the operation of a program, the CNC system appears "soft limit switch overrun", which belongs to
    2. The wrong description of the gas pipe is ()
    3. The following statement is wrong: ()
    the TCP / IP protocol structure includes () layer.

Add the records in table a to table B, and keep the original records in table B. the query that should be used is.

For additives with product anti-counterfeiting certification mark, after confirming that the product is in conformity with the factory quality certificate and the real object, one copy () shall be taken and pasted on the ex factory quality certificate of the product and filed together.

() accounts are disabled by default.

A concept of the device to monitor a person's bioparameters is that it should.
  1. For the cephalic vein, the wrong description is
    an image with a resolution of 16 pixels × 16 pixels and a color depth of 8 bits, with the data capacity of at least______ Bytes. (0.3 points)
  2. What are the requirements for the power cord of hand-held electric tools?

In the basic mode of electronic payment, credit card belongs to () payment system.

The triode has three working states: amplification, saturation and cut-off. In the digital circuit, when the transistor is used as a switch, it works in two states of saturation or cut-off.

Read the attached article and answer the following: compared with today's music, those of the past
() refers to the subjective conditions necessary for the successful completion of an activity.

In the OSI reference model, what is above the network layer is_______ 。

The decision tree corresponding to binary search is not only a binary search tree, but also an ideal balanced binary tree. In order to guide the interconnection, interoperability and interoperability of computer networks, ISO has issued the OSI reference model, and its basic structure is divided into
26_______ It belongs to the information system operation document.

In C ? language, the following operators have the highest priority___ ?
the full Chinese name of BPR is ()
please read the following procedures: dmain() {int a = 5, B = 0, C = 0; if (a = B + C) printf (\ * * \ n \); else printf (\ $$n \);} the above programs
() software is not a common tool for web page making.

When a sends a message to B, in order to achieve security, a needs to encrypt the message with ().

The Linux exchange partition is used to save the visited web page files.

  1. Materials consumed by the basic workshop may be included in the () cost item.

The coverage of LAN is larger than that of Wan.

Regarding the IEEE754 standard of real number storage, the wrong description is______

Task 4: convert decimal number to binary, octal and hexadecimal number [Topic 1] (1134.84375) 10 = () 2=()8 = () 16
the purpose of image data compression is to ()
in IE browser, to view the frequently visited sites that have been saved, you need to click.

  1. When several companies jointly write a document, the document number of each company should be quoted in the header at the same time. ()
    assuming that the highest frequency of analog signal is 10MHz, and the sampling frequency must be greater than (), then the sample signal can not be distorted.

The incredible performing artist from Toronto.
in access, the relationship between a table and a database is.

In word 2010, the following statement about the initial drop is correct.

Interrupt service sub function does not need to be called in the program, but after applying for interrupt, the CPU automatically finds the corresponding program according to the interrupt number.

Normal view mode is the default view mode for word documents.

A common variable is defined as follows: Union data {int a; int b; float C;} data; how much memory space does the variable data occupy in VC6.0?

______ It is not a relational database management system.

In the basic model of decision support system, what is in the core position is:
among the following key factors of software outsourcing projects, () is the factor that affects the final product quality and production efficiency of software outsourcing.

Word Chinese textThe shortcut for copying is ().
submitted by Amanda2020-jumi to u/Amanda2020-jumi [link] [comments]

Download PDF from URL in Excel VBA

Looking to download a PDF from a URL. I can't seem to get past the login screen: the downloaded pdf only contains the code for the login page if I open it in NotePad. I examined the POST request after logging in manually and pasted it after "FormData." I'm not sure if it matters what I called this variable? In one of the posts I reference below, he used "strAuthenticate."
When I examine the pdf that I want to download in chrome DevTools, it says this:
 
Does it matter that the pdf is a plugin rather than an attachment? Also, the src is the same site as what I have in the fileUrl in the code below.
Sub SaveFileFromURL()
    Dim FileNum As Long
    Dim FileData() As Byte
    Dim WHTTP As Object

    mainUrl = "https://www.website.com/j_security_check"
    fileUrl = "https://www.website.com.com/controlFileRetrieve?ignorePresentViaObject=true&curDomId=111&posId=4574137"
    filePath = "C:\myfile.pdf"
    myuser = "xxxxxx"
    mypass = "xxxxxx"
    'Form fields must be separated with "&" in a urlencoded body
    j_security_check = "j_username=" & myuser & "&j_password=" & mypass

    Set WHTTP = CreateObject("WinHTTP.WinHTTPrequest.5.1")
    WHTTP.Open "POST", mainUrl, False
    'WHTTP.Open "POST", fileUrl, False
    WHTTP.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
    WHTTP.send j_security_check

    WHTTP.Open "GET", fileUrl, False
    WHTTP.send
    Debug.Print WHTTP.getAllResponseHeaders()

    FileData = WHTTP.responseBody
    Set WHTTP = Nothing

    FileNum = FreeFile
    Open filePath For Binary Access Write As #FileNum
    Put #FileNum, 1, FileData
    Close #FileNum

    MsgBox "File has been saved!", vbInformation, "Success"
End Sub
Links that I have referenced or looked at:
VBA WinHTTP to download file from password proteced https website
How to make a POST request to a page that may redirect to a login page
Any help is appreciated!
submitted by GlBBLES to vba [link] [comments]

Installing CMake requires CMake?

Hello all,
I've been away from Gentoo for a year, but I'm coming back and trying to install it on a Lenovo ThinkPad X250. My goal is Gentoo without systemd, with Wayland and Sway, eventually migrating (once stable) to hardened+selinux. I used the current-stage3-amd64 build and followed Full Disk Encryption From Scratch Simplified. My system boots perfectly to runlevel 3 and has no issues with LUKS or networking.
Since first boot, I have installed lm-sensors and laptop-mode-tools, following the wiki for appropriate kernel options to recompile with. Then I wanted to install Wayland + Sway, so I installed dev-libs/wayland, then tried installing gui-wm/sway, but the dependencies failed on graphite2.
I updated the system with emerge -avuDU --keep-going --with-bdeps=y @world. I think graphite2 finished at that point, but then another dependency failed to install. One of the lines was "meson: command not found", so I installed meson. Repeat: "ninja: command not found", so I installed ninja. Repeat: "cmake: command not found", so I tried to install cmake. Except when I install cmake, I get "cmake: command not found".
Is something wrong with my installation? I don't remember these issues last year, and I was able to get to an X11/KDE environment without issue.

Here is my build.log for cmake
 * Package: dev-util/cmake-3.14.6 * Repository: gentoo * Maintainer: [email protected] * USE: abi_x86_64 amd64 elibc_glibc kernel_linux ncurses userland_GNU * FEATURES: network-sandbox preserve-libs sandbox userpriv usersandbox >>> Unpacking source... >>> Unpacking cmake-3.14.6.tar.gz to /vatmp/portage/dev-util/cmake-3.14.6/work >>> Source unpacked in /vatmp/portage/dev-util/cmake-3.14.6/work >>> Preparing source in /vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6 ... * Applying cmake-3.4.0_rc1-darwin-bundle.patch ... [ ok ] * Applying cmake-3.14.0_rc3-prefix-dirs.patch ... [ ok ] * Applying cmake-3.14.0_rc1-FindBLAS.patch ... [ ok ] * Applying cmake-3.14.0_rc1-FindLAPACK.patch ... [ ok ] * Applying cmake-3.5.2-FindQt4.patch ... [ ok ] * Applying cmake-2.8.10.2-FindPythonLibs.patch ... patching file Modules/FindPythonLibs.cmake Hunk #1 succeeded at 117 with fuzz 2 (offset 43 lines). [ ok ] * Applying cmake-3.9.0_rc2-FindPythonInterp.patch ... [ ok ] * Working in BUILD_DIR: "/vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build" * Hardcoded definition(s) removed in CMakeLists.txt: * set(CMAKE_INSTALL_PREFIX "${CMAKE_INSTALL_PREFIX}/") * Hardcoded definition(s) removed in Tests/JavaJavah/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/QtAutogen/UicInterface/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE ON) * Hardcoded definition(s) removed in Tests/JavaNativeHeaders/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/Qt4Deploy/CMakeLists.txt: * set(CMAKE_INSTALL_PREFIX ${CMAKE_CURRENT_BINARY_DIR}/install) * Hardcoded definition(s) removed in Tests/CPackComponents/CMakeLists.txt: * set(CMAKE_INSTALL_PREFIX "/opt/mylib") * Hardcoded definition(s) removed in Tests/SetLang/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/CMakeOnly/SelectLibraryConfigurations/CMakeLists.txt: * set(CMAKE_BUILD_TYPE Debug) * Hardcoded definition(s) removed 
in Tests/CMakeOnly/CheckCXXCompilerFlag/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/Java/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/AssembleCMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/FindPackageTest/CMakeLists.txt: * set(CMAKE_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/NotDefaultPrefix") * Hardcoded definition(s) removed in Tests/OutDiCMakeLists.txt: * set(CMAKE_BUILD_TYPE) * set(CMAKE_BUILD_TYPE Debug) * Hardcoded definition(s) removed in Tests/RunCMake/CPack/CMakeLists.txt: * set(CMAKE_BUILD_TYPE "Debug" CACHE STRING "") * Hardcoded definition(s) removed in Tests/JavaExportImport/BuildExport/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/JavaExportImport/InstallExport/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/JavaExportImport/Import/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/Fortran/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/SubDirSpaces/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE 1) * Hardcoded definition(s) removed in Tests/CMakeCommands/target_compile_features/CMakeLists.txt: * set(CMAKE_VERBOSE_MAKEFILE ON) >>> Source prepared. >>> Configuring source in /vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6 ... 
* Working in BUILD_DIR: "/vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build" cmake -C /vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build/gentoo_common_config.cmake -G Unix Makefiles -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_USE_SYSTEM_LIBRARIES=ON -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=no -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_DOC_DIR=/share/doc/cmake-3.14.6 -DCMAKE_MAN_DIR=/share/man -DCMAKE_DATA_DIR=/share/cmake -DSPHINX_MAN=no -DSPHINX_HTML=no -DBUILD_CursesDialog=yes -DBUILD_TESTING=no -DCMAKE_BUILD_TYPE=Gentoo -DCMAKE_TOOLCHAIN_FILE=/vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build/gentoo_toolchain.cmake /vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6 /vatmp/portage/dev-util/cmake-3.14.6/temp/environment: line 920: cmake: command not found * ERROR: dev-util/cmake-3.14.6::gentoo failed (configure phase): * cmake failed * * Call stack: * ebuild.sh, line 125: Called src_configure * environment, line 2230: Called cmake_src_configure * environment, line 920: Called die * The specific snippet of code: * "${CMAKE_BINARY}" "${cmakeargs[@]}" "${CMAKE_USE_DIR}" || die "cmake failed"; * * If you need support, post the output of `emerge --info '=dev-util/cmake-3.14.6::gentoo'`, * the complete build log and the output of `emerge -pqv '=dev-util/cmake-3.14.6::gentoo'`. * The complete build log is located at '/vatmp/portage/dev-util/cmake-3.14.6/temp/build.log'. * The ebuild environment file is located at '/vatmp/portage/dev-util/cmake-3.14.6/temp/environment'. * Working directory: '/vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build' * S: '/vatmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6' 
And the output from emerge --info '=dev-util/cmake-3.14.6::gentoo'
Portage 2.3.84 (python 3.6.9-final-0, default/linux/amd64/17.1, gcc-9.2.0, glibc-2.29-r7, 4.19.97-gentoo-x86_64 x86_64)
=================================================================
System Settings
=================================================================
System uname: Lin[email protected]_2.30GHz-with-gentoo-2.6
KiB Mem: 16292612 total, 15779948 free
KiB Swap: 4194300 total, 4194300 free
Timestamp of repository gentoo: Mon, 03 Feb 2020 00:45:01 +0000
Head commit of repository gentoo: cf12d7fd5d98f5209513bcc9b93388e98d785fd5
sh bash 4.4_p23-r1
ld GNU ld (Gentoo 2.32 p2) 2.32.0
app-shells/bash: 4.4_p23-r1::gentoo
dev-lang/perl: 5.30.1::gentoo
dev-lang/python: 2.7.17::gentoo, 3.6.9::gentoo
dev-util/cmake: 3.14.6::gentoo
sys-apps/baselayout: 2.6-r1::gentoo
sys-apps/openrc: 0.42.1::gentoo
sys-apps/sandbox: 2.13::gentoo
sys-devel/autoconf: 2.69-r4::gentoo
sys-devel/automake: 1.16.1-r1::gentoo
sys-devel/binutils: 2.32-r1::gentoo
sys-devel/gcc: 9.2.0-r2::gentoo
sys-devel/gcc-config: 2.1::gentoo
sys-devel/libtool: 2.4.6-r6::gentoo
sys-devel/make: 4.2.1-r4::gentoo
sys-kernel/linux-headers: 4.19::gentoo (virtual/os-headers)
sys-libs/glibc: 2.29-r7::gentoo
Repositories:

gentoo
    location: /var/db/repos/gentoo
    sync-type: rsync
    sync-uri: rsync://rsync.gentoo.org/gentoo-portage
    priority: -1000
    sync-rsync-verify-jobs: 1
    sync-rsync-verify-max-age: 24
    sync-rsync-extra-opts:
    sync-rsync-verify-metamanifest: yes

ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="@FREE"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -pipe"
CHOST="x86_64-pc-linux-gnu"
CONFIG_PROTECT="/etc /usr/share/gnupg/qualified.txt"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/gconf /etc/gentoo-release /etc/sandbox.d /etc/terminfo"
CXXFLAGS="-O2 -pipe"
DISTDIR="/var/cache/distfiles"
ENV_UNSET="DBUS_SESSION_BUS_ADDRESS DISPLAY GOBIN PERL5LIB PERL5OPT PERLPREFIX PERL_CORE PERL_MB_OPT PERL_MM_OPT XAUTHORITY XDG_CACHE_HOME XDG_CONFIG_HOME XDG_DATA_HOME XDG_RUNTIME_DIR"
FCFLAGS="-O2 -pipe"
FEATURES="assume-digests binpkg-docompress binpkg-dostrip binpkg-logs config-protect-if-modified distlocks ebuild-locks fixlafiles ipc-sandbox merge-sync multilib-strict network-sandbox news parallel-fetch pid-sandbox preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr"
FFLAGS="-O2 -pipe"
GENTOO_MIRRORS="http://distfiles.gentoo.org"
LANG="C"
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
MAKEOPTS="-j3"
PKGDIR="/var/cache/binpkgs"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages --exclude=/.git"
PORTAGE_TMPDIR="/var/tmp"
USE="acl amd64 berkdb bzip2 cli crypt cxx dri fortran gdbm iconv ipv6 libtirpc multilib ncurses nls nptl openmp pam pcre readline seccomp split-usr ssl tcpd unicode wayland xattr zlib" ABI_X86="64" ADA_TARGET="gnat_2018" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="karbon sheets words" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" CPU_FLAGS_X86="mmx mmxext sse sse2" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock greis isync itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf skytraq superstar2 timing tsip tripmate tnt ublox ubx" INPUT_DEVICES="libinput keyboard mouse" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer" OFFICE_IMPLEMENTATION="libreoffice" PHP_TARGETS="php7-2" POSTGRES_TARGETS="postgres10 postgres11" PYTHON_SINGLE_TARGET="python3_6" PYTHON_TARGETS="python2_7 python3_6" RUBY_TARGETS="ruby24 ruby25" USERLAND="GNU" VIDEO_CARDS="amdgpu fbdev intel nouveau radeon radeonsi vesa dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset: CC, CPPFLAGS, CTARGET, CXX, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LC_ALL, LINGUAS, PORTAGE_BINHOST, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS
=================================================================
Package Settings
=================================================================
dev-util/cmake-3.14.6::gentoo was built with the following:
USE="ncurses -doc -emacs -qt5 -system-jsoncpp -test" ABI_X86="(64)"
FEATURES="assume-digests binpkg-docompress binpkg-dostrip binpkg-logs buildpkg config-protect-if-modified distlocks ebuild-locks fail-clean fixlafiles ipc-sandbox merge-sync multilib-strict network-sandbox parallel-fetch preserve-libs protect-owned sandbox selinux sesandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr"
Thank you for looking at this! Any guidance would be appreciated!
submitted by ragnarok189 to Gentoo [link] [comments]

05-24 03:24 - 'Building libva in while disabling libdrm' (self.linux) by /u/hiihiiii removed from /r/linux within 276-286min

'''
I've been struggling with this for a few days now and honestly I don't know where to turn. I've had success getting a clean build of ffmpeg with a few dependencies via Cygwin, but I'm having trouble getting it built with libmfx/QuickSync enabled.
In order to get ffmpeg built with libmfx I need to build the msdk (Intel Media SDK). In turn, the msdk has a dependency on libva, which needs libdrm to access the DRM infrastructure of the Linux kernel. Here's where it gets rough: libdrm just doesn't exist on Windows; there's no infrastructure for it to make any sense, so it's not available as a package or a separate lib for Cygwin.
For this to work, I'd need to find a way to tell libva, maybe through configure options/flags, not to use DRM. How can I achieve that?
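One hedged possibility for the configure-flag route asked about above: some libva releases expose a backend toggle in their autotools setup, in which case the DRM backend could be switched off at configure time. This is an assumption about this libva checkout, not a verified recipe — check `./configure --help` for the actual flag names first.

```shell
# Hypothetical sketch: IF this libva version's configure.ac provides an
# --enable-drm switch (verify with ./configure --help), the DRM backend
# could be dropped at configure time like so:
./autogen.sh --prefix=/usr/local \
    --enable-static --disable-shared \
    --disable-drm   # assumed flag; invalid if configure.ac lacks AC_ARG_ENABLE([drm])
```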
Here's my libva make command:
cd /ffmpeg_sources && rm -rf libva && git clone [link]^^1 libva && cd libva && export CFLAGS=-I/usr/x86_64-w64-mingw32/sys-root/mingw/include && export LDFLAGS=-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib && export LD_LIBRARY_PATH=/ffmpeg_sources/libva/ && export PKG_CONFIG_LIBDIR=/usr/x86_64-w64-mingw32/sys-root/mingw/lib/pkgconfig && export PKG_CONFIG_PATH=/usr/x86_64-w64-mingw32/sys-root/mingw/lib/pkgconfig && ./autogen.sh --prefix=/usr/local --libdir=/usr/x86_64-w64-mingw32/sys-root/mingw/lib --enable-static --disable-shared && make -j$(nproc) && make install 
Here's the log for it:
autoreconf-2.69: Entering directory `.' autoreconf-2.69: configure.ac: not using Gettext autoreconf-2.69: running: aclocal -I m4 ${ACLOCAL_FLAGS} autoreconf-2.69: configure.ac: tracing autoreconf-2.69: running: libtoolize --copy autoreconf-2.69: running: /usr/bin/autoconf-2.69 autoreconf-2.69: running: /usr/bin/autoheader-2.69 autoreconf-2.69: running: automake --add-missing --copy --no-force va/wayland/Makefile.am:30: warning: source file '../drm/va_drm_utils.c' is in a subdirectory, va/wayland/Makefile.am:30: but option 'subdir-objects' is disabled automake-1.16: warning: possible forward-incompatibility. automake-1.16: At least a source file is in a subdirectory, but the 'subdir-objects' automake-1.16: automake option hasn't been enabled. For now, the corresponding output automake-1.16: object file(s) will be placed in the top-level directory. However, automake-1.16: this behaviour will change in future Automake versions: they will automake-1.16: unconditionally cause object files to be placed in the same subdirectory automake-1.16: of the corresponding sources. automake-1.16: You are advised to start using 'subdir-objects' option throughout your automake-1.16: project, to avoid future incompatibilities. autoreconf-2.69: Leaving directory `.' checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /usr/bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking whether make supports nested variables... (cached) yes checking build system type... x86_64-pc-cygwin checking host system type... x86_64-pc-cygwin checking how to print strings... printf checking whether make supports the include directive... yes (GNU style) checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.exe checking for suffix of executables... 
.exe checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for fgrep... /usr/bin/grep -F checking for ld used by gcc... /usr/x86_64-pc-cygwin/bin/ld.exe checking if the linker (/usr/x86_64-pc-cygwin/bin/ld.exe) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 8192 checking how to convert x86_64-pc-cygwin file names to x86_64-pc-cygwin format... func_convert_file_noop checking how to convert x86_64-pc-cygwin file names to toolchain format... func_convert_file_noop checking for /usr/x86_64-pc-cygwin/bin/ld.exe option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... file_magic ^x86 archive import|^x86 DLL checking for dlltool... dlltool checking how to associate runtime and link libraries... func_cygming_dll_for_implib checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for a working dd... /usr/bin/dd checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 checking for mt... no checking if : is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... 
yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -DDLL_EXPORT -DPIC checking if gcc PIC flag -DDLL_EXPORT -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/x86_64-pc-cygwin/bin/ld.exe) supports shared libraries... yes checking dynamic linker characteristics... Win32 ld.exe checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... no checking whether to build static libraries... yes checking for gcc... (cached) gcc checking whether we are using the GNU C compiler... (cached) yes checking whether gcc accepts -g... (cached) yes checking for gcc option to accept ISO C89... (cached) none needed checking whether gcc understands -c and -o together... (cached) yes checking dependency style of gcc... (cached) gcc3 checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/x86_64-pc-cygwin/bin/ld.exe checking if the linker (/usr/x86_64-pc-cygwin/bin/ld.exe) is GNU ld... yes checking whether the g++ linker (/usr/x86_64-pc-cygwin/bin/ld.exe) supports shared libraries... yes checking for g++ option to produce PIC... -DDLL_EXPORT -DPIC checking if g++ PIC flag -DDLL_EXPORT -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... 
yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/x86_64-pc-cygwin/bin/ld.exe) supports shared libraries... yes checking dynamic linker characteristics... Win32 ld.exe checking how to hardcode library paths into programs... immediate checking for a sed that does not truncate output... (cached) /usr/bin/sed checking for pkg-config... /usr/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for ANSI C header files... (cached) yes checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking whether __attribute__((visibility())) is supported... no checking whether gcc accepts -fstack-protector... yes checking for DRM... no configure: error: Package requirements (libdrm >= 2.4) were not met: Package 'libdrm', required by '[link]^^2 ', not found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables DRM_CFLAGS and DRM_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. 
'''
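The configure error in the log suggests its own escape hatch: supplying DRM_CFLAGS and DRM_LIBS directly bypasses the pkg-config lookup for libdrm. A minimal sketch of that route — the paths are placeholders, not a tested Cygwin recipe, and it only helps if usable DRM headers/libs exist somewhere:

```shell
# Per the configure error's own hint: setting DRM_CFLAGS and DRM_LIBS
# skips the "checking for DRM" pkg-config probe entirely.
# /path/to/drm/* below are placeholders.
export DRM_CFLAGS="-I/path/to/drm/include"
export DRM_LIBS="-L/path/to/drm/lib -ldrm"
./autogen.sh --prefix=/usr/local --enable-static --disable-shared
```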
Building libva in while disabling libdrm
Go1dfish undelete link
unreddit undelete link
Author: hiihiiii
1: gith*b*co*/intel/l***a 2: virtual:world
Unknown links are censored to prevent spreading illicit content.
submitted by removalbot to removalbot [link] [comments]


By giving a list of options this content negotiation happens in a single request. ... if there is */*, send it as HTML; finally, throw an HTTP 415 out. The purpose of the WebKit HTTP Accept header is to ask the server to send XHTML instead of HTML. So there are certainly good arguments around functionality, but I would add another argument for changing the ...

The HTTP Accept-Language header tells the server which languages the client can understand. Through content negotiation, the server selects one of the languages proposed in the Accept-Language header and places its choice in the Content-Language response header. In a few cases users can change the languages manually ...

HTTP interceptors are now available via the new HttpClient from @angular/common/http, as of Angular 4.3.x versions and beyond. It's pretty simple to add a header for every request now:

import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

export class AddHeaderInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // HttpRequest instances are immutable, so clone and attach the header
    const cloned = req.clone({ setHeaders: { Authorization: 'my-auth-token' } });
    return next.handle(cloned);
  }
}

This article documents the default values for the HTTP Accept header for specific inputs and browser versions. Default values: these are the values sent when the context doesn't give better information.

The HTTP Accept header is a request-type header. It is used by the client to inform the server which content types, expressed as MIME types, it can understand. Using content negotiation, the server selects one of the proposals and informs the client of its choice with the Content-Type response header.

When I make a request, I get a response in XML, but what I need is JSON. The docs state that in order to get JSON in return: use the Accept: application/json HTTP header. Where do I find the HTTP header to put Accept: application/json inside? My guess is it is not supposed to be inside the URL-request, which looks like: ...

The HTTP Connection header is a general-type header that allows the sender or client to specify options that are desired for that particular connection. Instead of opening a new connection for every single request/response, Connection helps in sending or receiving multiple HTTP requests/responses over a single TCP connection. It also controls whether the connection stays open or is closed ...

As you correctly note, the Accept header is used by HTTP clients to tell the server what content types they'll accept. The server will then send back a response, which will include a Content-Type header telling the client what the content type of the returned content actually is. However, as you may have noticed, HTTP requests can also contain Content-Type headers.

The Accept request HTTP header advertises which content types, expressed as MIME types, the client is able to understand. Using content negotiation, the server then selects one of the proposals, uses it and informs the client of its choice with the Content-Type response header. Browsers set adequate values for this header depending on the context where the request is done: when fetching a CSS ...

If no Accept header field is present, then it is assumed that the client accepts all media types. If an Accept header field is present, and if the server cannot send a response which is acceptable according to the combined Accept field value, then the server SHOULD send a 406 (Not Acceptable) response. A more elaborate example is Accept: text/plain; q=0.5, text/html, text/x-dvi; q=0.8, text/x ...
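The q-value rules quoted above can be made concrete with a short sketch. This is an illustrative toy, not a full RFC 7231 implementation — it ignores type wildcards like text/*, media-type parameters, and tie-breaking by specificity; the function names are my own:

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, sorted by q descending."""
    entries = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        q = 1.0  # an entry without an explicit q parameter defaults to q=1
        for param in pieces[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        entries.append((pieces[0], q))
    entries.sort(key=lambda e: e[1], reverse=True)
    return entries


def negotiate(header, available):
    """Pick the first acceptable type from `available`; None maps to HTTP 406."""
    for media_type, q in parse_accept(header):
        if q <= 0:
            continue  # q=0 means "explicitly not acceptable"
        if media_type == "*/*":
            return available[0]
        if media_type in available:
            return media_type
    return None
```

For the elaborate example above, negotiate("text/plain; q=0.5, text/html, text/x-dvi; q=0.8", ["text/html", "text/plain"]) returns "text/html", since text/html carries the implicit q=1 and outranks the q=0.8 and q=0.5 entries.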


