Installing PHP 7.2 on Ubuntu 18.04

sudo add-apt-repository ppa:ondrej/php

sudo apt-get update

sudo apt-get install php7.2 libapache2-mod-php php7.2-curl php7.2-gd php7.2-mbstring php7.2-xml php7.2-xmlrpc php7.2-mysql

(Note: the mcrypt extension was removed from PHP 7.2 core, so there is no php7.2-mcrypt package; it is available via PECL if you really need it.)


In case php7.2-xmlrpc gives a "package not found" error, use:

sudo apt-get install php7.2 libapache2-mod-php php7.2-curl php7.2-gd php7.2-mbstring php7.2-xml php7.2-mysql

This will install PHP 7.2 on your server.

sudo systemctl restart apache2
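
Once the packages are installed and Apache restarted, it is worth confirming the PHP version with php -v. A minimal sketch of extracting the major.minor version from its first line (the sample string below is hypothetical output; on your server, pipe the real php -v into the same sed):

```shell
# Hypothetical first line of `php -v` output on Ubuntu 18.04
sample='PHP 7.2.24-0ubuntu0.18.04.7 (cli) (built: Oct 28 2019 12:07:07)'

# Keep only the major.minor version number
version=$(echo "$sample" | sed -E 's/^PHP ([0-9]+\.[0-9]+).*/\1/')
echo "$version"   # prints 7.2
```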

Installing PHP 7 on Ubuntu 16.04

sudo apt-get update

sudo apt-get install php libapache2-mod-php php-curl php-gd php-mbstring php-mcrypt php-xml php-mysql php-xmlrpc

In case php-xmlrpc gives a "package not found" error, use:

sudo apt-get install libapache2-mod-php php-curl php-gd php-mbstring php-mcrypt php-mysql php-xml

This will install PHP 7.0 on your server.

sudo systemctl restart apache2

How To Find your Server's Public IP Address

If you do not know what your server's public IP address is, there are a number of ways you can find it. Usually, this is the address you use to connect to your server through SSH.

From the command line, you can find this a few ways.

    ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

This will give you two or three lines back. They are all correct addresses, but your computer may only be able to use one of them.
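
Note that the interface is not always eth0 on newer Ubuntu releases (names like ens3 or enp0s3 are common; `ip -o link show` lists them). The pipeline above can be sketched against a hypothetical inet line to show what each stage does:

```shell
# A hypothetical line as printed by `ip addr show eth0`
sample='    inet 203.0.113.10/24 brd 203.0.113.255 scope global eth0'

# grep keeps inet lines, awk takes field 2, sed strips the /24 prefix length
echo "$sample" | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'   # prints 203.0.113.10
```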

Install Apache in Ubuntu 16.04

We can get started by typing these commands:

    sudo apt-get update
    sudo apt-get install apache2

Since we are using a sudo command, these operations get executed with root privileges.

Open up the main configuration file with your text editor:

    sudo nano /etc/apache2/apache2.conf

Edit the following line, replacing server_domain_or_IP with your server's domain or IP:

    ServerName server_domain_or_IP
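
As a non-interactive alternative to editing by hand, the directive can be appended from the shell. A sketch against a temp copy (the real file is /etc/apache2/apache2.conf, and 203.0.113.10 is a placeholder for your own domain or IP):

```shell
# Work on a temp copy for illustration; the real file is /etc/apache2/apache2.conf
conf=/tmp/apache2.conf
echo '# ... existing directives ...' > "$conf"

# 203.0.113.10 is a placeholder; substitute your own domain or IP
echo 'ServerName 203.0.113.10' >> "$conf"

grep '^ServerName' "$conf"   # prints: ServerName 203.0.113.10
```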

Next, check for syntax errors by typing:

    sudo apache2ctl configtest

Since we added the global ServerName directive, all you should see is:

    Output
    Syntax OK


Restart Apache to implement your changes:

    sudo systemctl restart apache2

Adjust the Firewall to Allow Web Traffic

If you look at the "Apache Full" profile, it should show that it enables traffic to ports 80 and 443. Allow incoming traffic for this profile:

    sudo ufw allow in "Apache Full"

You can then confirm that Apache is reachable by visiting your server in a browser:

    http://your_server_IP_address


Hide Apache ServerSignature / ServerTokens / PHP X-Powered-By

Hiding and modifying Apache server information


Fortunately, such data can easily be hidden or modified by changing the ServerSignature and ServerTokens directives.

ServerSignature

ServerSignature configures the footer on server-generated documents, such as the default 404 error page. For normal use it is better to hide the whole signature by adding or modifying the following line in httpd.conf (or apache2.conf):

ServerSignature Off

ServerTokens

ServerTokens configures the Server HTTP response header. The available ServerTokens options are the following (add or modify the directive in httpd.conf or apache2.conf):

Prod or ProductOnly – Server sends (e.g.): Server: Apache
ServerTokens Prod

Major – Server sends (e.g.): Server: Apache/2
ServerTokens Major

Minor – Server sends (e.g.): Server: Apache/2.2
ServerTokens Minor

Min or Minimal – Server sends (e.g.): Server: Apache/2.2.4
ServerTokens Min

OS – Server sends (e.g.): Server: Apache/2.2.4 (Ubuntu)
ServerTokens OS

Full or not specified – Server sends (e.g.): Server: Apache/2.2.4 (Ubuntu) PHP/5.2.3-1ubuntu6.4
ServerTokens Full

ServerTokens setting applies to the entire server, and cannot be enabled or disabled on a virtualhost-by-virtualhost basis.
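
The two directives are typically set together. A sketch that writes them to a test file and checks both are present (on Ubuntu the usual location is /etc/apache2/conf-enabled/security.conf; the temp path here is for illustration only):

```shell
# Write the two hardening directives to a test file
cat > /tmp/security.conf <<'EOF'
ServerSignature Off
ServerTokens Prod
EOF

# Both directives should be present before reloading Apache
grep -c '^Server' /tmp/security.conf   # prints 2
```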

Hide PHP version (X-Powered-By)

Hiding the PHP version (the X-Powered-By header) is easy. Add or modify the following line in php.ini:
expose_php = Off






What is Jenkins?

Jenkins is a cross-platform continuous integration and continuous delivery application.

It is used to:
  • build and test your software projects continuously
  • continuously deliver your software
Advantages:
  • open source, and can handle any kind of build or continuous integration job
  • integrates with a number of testing and deployment technologies
  • cross-platform
Features:
  • Easy installation
  • Easy configuration
  • Rich plugin ecosystem
  • Extensibility
  • Distributed builds 
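
As a concrete illustration of these features, a minimal declarative Jenkinsfile sketch (the make targets are placeholders for whatever build and test commands a project actually uses):

```groovy
pipeline {
    agent any                        // run on any available node
    stages {
        stage('Build') {
            steps { sh 'make' }      // placeholder build command
        }
        stage('Test') {
            steps { sh 'make test' } // placeholder test command
        }
    }
}
```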
 

Vulnerabilities in Web Applications

A vulnerability is a system flaw or weakness in an application that could be exploited to compromise the security of the application. These attacks target the confidentiality, integrity, or availability (known as the "CIA triad") of resources possessed by an application, its creators, and its users.
Too often, web security becomes a priority only after a breach has occurred.

They can be categorized as:
  • Anti CSRF Tokens Scanner
  • Insecure Component
  • SQL Injection
  • Source Code Disclosure
  • Directory Browsing
  • Insecure HTTP Method - Trace
  • User Controllable JavaScript Event (XSS)
  • X-Frame-Options Header not set
  • Big Redirects
  • Content Security Policy Header not set
  • Server leaks information by "X-Powered-By" in HTTP response
  • Server leaks information by "Server" in HTTP response
  • Web Browser XSS Protection Not Enabled
  • X-Content-Type-Options Header Missing
  • Sensitive Data Exposure - Base64 Disclosure
  • Non-Storable Content
  • Timestamp Disclosure Unix
  • Broken Authentication
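
To make one of these concrete, SQL injection arises when user input is pasted directly into a query string. A language-agnostic sketch of the flawed string building (table and parameter names are hypothetical):

```shell
# Attacker-supplied value for an "id" parameter
user_input='1 OR 1=1'

# Naive interpolation: the input becomes part of the SQL itself
query="SELECT * FROM users WHERE id = $user_input"
echo "$query"   # prints: SELECT * FROM users WHERE id = 1 OR 1=1
```

The appended OR 1=1 makes the WHERE clause always true, returning every row; parameterized queries avoid this by keeping input out of the SQL text.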


Why check for vulnerabilities or do penetration testing?

As website hacking increases day by day, web application security scanners play an important role. A web application security scanner is a software program that performs automated black-box testing on a web application and identifies security vulnerabilities. Scanners do not access the source code; they perform only functional testing and try to find security vulnerabilities.


Tools for vulnerability scanning and penetration testing:
  • ZAP [Preferred as used]
  • SQLMap [Preferred as used]
  • Grabber
  • Vega
  • Wapiti
  • WebScarab
  • Skipfish
  • Ratproxy
  • Wfuzz
  • Watcher
  • X5S
  • Arachni         
ZAP and SQLMap are both open source; I have used them for checking vulnerabilities, and both tools can cover a wide range of attack scenarios.




Infrastructure as Code

"Infrastructure as Code (IaC) is the process of managing and provisioning computing infrastructure and their configuration through machine-processable definition files, rather than physical hardware configuration or the use of interactive configuration tools. The definition files may be in a version control system. This has been achieved previously through either scripts or declarative definitions, rather than manual processes, but developments as specifically titled 'IaC' are now focused on the declarative approaches." - wikipedia

Advantages:
  • Cost reduction
  • Faster execution
  • Risk reduction (fewer errors and security violations)
These outcomes and attributes help the enterprise move towards implementing a culture of DevOps.

Tools:
  • Ansible Tower 
  • CFEngine 
  • Chef 
  • Otter 
  • Puppet 
  • SaltStack

Continuous Delivery

"Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process. " - AWS

Why to go for Continuous delivery? What are the benefits?

  • It automates the software release process
  • It improves developer productivity
  • Bugs can be found and addressed more quickly
  • Updates can be delivered faster
  • Higher quality
  • Lower costs
  • Better products

Continuous Integration

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Teams practicing continuous integration seek two objectives:
  • minimize the duration and effort required by each integration episode
  • be able to deliver a product version suitable for release at any moment
Advantages:
  • By integrating regularly, you can detect errors quickly, and locate them more easily.
  • Because you’re integrating so frequently, there is significantly less back-tracking to discover where things went wrong, so you can spend more time building features.
  • A continuous integration approach ensures that the project is always ready to use.
 The continuous integration process involves multiple parts working together to accomplish the desired goals. The following are some of the primary parts of a continuous integration system.

  • Source Control
  • Build Server
  • Automated Tests
  • Notifications
  • Build Publishing 


Tools for CI:
  • Jenkins
  • Buildbot
  • Travis CI
  • Strider
  • Integrity

What is DevOps?

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

DevOps engineers use a technology stack and tooling that help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks that would normally have required help from other teams, and this further increases a team's velocity.

Benefits of DevOps:
  • Speed
  • Rapid Delivery
  • Reliability
  • Scale 
  • Improved Collaboration
  • Security
Best DevOps practices:
  • Continuous Integration
  • Continuous Delivery
  • Micro-services
  • Infrastructure as code
  • Configuration management
  • Policy as code
  • Monitoring and Logging
  • Communication and collaboration
DevOps tools:
  • Jenkins
  • Chef/Puppet
  • AWS
  • Python programming
  • Linux shell programming

What is JMeter? And why use it?

JMeter is an open source tool used to measure how efficiently a web server performs and how many concurrent requests it can handle.

Apache JMeter may be used to test both functional behavior and performance, on static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers, and more). It can be used to simulate a heavy load on a server, network, or object to test its strength, or to analyze overall performance under different load types. You can also use it to perform functional tests on websites, databases, LDAP servers, web services, etc.

Let us discuss a few scenarios that could be tested using JMeter; these may help in understanding what can be done with it.

Scenario A:

You need to run a performance test of a static website, http://www.abc.com, for around 200 users.

Scenario B:

You need to run a performance test of a session-based web product, where a user logs into an account, uses product features, buys products or performs other activities, and then logs out.

Scenario C:

You need to test a token-based web product with assertion test cases or other logical checks.

Scenario D:

You need to test a web product with multiple groups of users that log in within a ramp-up period and follow different access paths through the product.

A few terms that will be needed when reading JMeter output:
  • Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.  
  • Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
  • Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout. 
  • Median is a number which divides the samples into two equal halves. Half of the samples are smaller than the median, and half are larger. [Some samples may equal the median.] This is a standard statistical measure. See, for example: Median entry at Wikipedia. The Median is the same as the 50th Percentile  
  • Standard Deviation is a measure of the variability of a data set. This is a standard statistical measure. See, for example: Standard Deviation entry at Wikipedia. JMeter calculates the population standard deviation (e.g. STDEVP function in spreadsheets), not the sample standard deviation (e.g. STDEV). 
  • Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
    The formula is: Throughput = (number of requests) / (total time). 
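
A worked example of this formula with hypothetical numbers (shell integer arithmetic; fractional throughput would need bc or awk):

```shell
requests=1200        # total samples recorded
start_s=0            # start of the first sample, in seconds
end_s=60             # end of the last sample, in seconds

total_time=$((end_s - start_s))
throughput=$((requests / total_time))
echo "$throughput requests/second"   # prints: 20 requests/second
```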



SSH SECURITY (enable CTR or GCM cipher mode encryption)

The SSH server is configured to allow either MD5 or 96-bit MAC algorithms, both of which are considered weak.
Fix: disable the MD5 and 96-bit MAC algorithms.

The SSH server is configured to support Cipher Block Chaining (CBC) encryption. This may allow an attacker to recover the plaintext message from the ciphertext.
Fix: disable CBC mode cipher encryption, and enable CTR or GCM cipher mode encryption instead.

This means that if two machines are connecting to each other (without overriding the default ciphers through configuration options), they will always use the aes128-ctr cipher to encrypt their connection.

There are a couple of sections in the ssh_config and sshd_config files that can be changed.
Those are the "Ciphers" and the "MACs" sections of the config files.


Disable MD5,96-bit MAC algorithms and CBC mode cipher encryption, and enable CTR or GCM cipher mode encryption

MD5 (Message Digest algorithm):
  • It is a cryptographic hash function.
  • It produces a 128-bit hash value.
  • The hash value represents a fingerprint of the data.
  • It is mainly used to check data integrity, so one can recognize whether a file has changed.

The MAC algorithm is used in protocol version 2 for data integrity protection. Multiple algorithms must be comma-separated. The algorithms that contain "-etm" calculate the MAC after encryption (encrypt-then-mac).

To restrict the accepted MACs, set the MACs line in ssh_config to exclude the MD5 and 96-bit variants, for example:
MACs hmac-sha2-256,hmac-sha2-512,umac-128@openssh.com

If you are on the server itself, you can see which MACs sshd is configured to use with the -T option, and you can test from a client whether a weak MAC is still accepted:
# sshd -T | egrep '^macs'
# ssh -vv -oMACs=hmac-md5 <server>

Then set the same MACs line in sshd_config and restart the SSH service.

Enabling CTR (the weak arcfour, CBC, and 3DES ciphers are deliberately left out):
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
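
Putting the two changes together, a sketch that writes hardened Ciphers and MACs lines to a test file and confirms no weak algorithm names remain (on a real host the file is /etc/ssh/sshd_config and sshd must be restarted afterwards; the exact algorithm list should match what your OpenSSH version supports):

```shell
# Hardened cipher/MAC selection written to a test file
cat > /tmp/sshd_hardened.conf <<'EOF'
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-256,hmac-sha2-512
EOF

# No md5, 96-bit, or cbc entries should survive
grep -Ei 'md5|96|cbc' /tmp/sshd_hardened.conf || echo 'no weak algorithms found'
```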

What are second-level domains (SLD) and country code second level domains (ccSLD)?

A second-level domain (SLD) is the portion of the domain name that is located immediately to the left of the dot and domain name extension. Example 1: The SLD in coolexample.com is coolexample. Example 2: The SLD in coolexample.co.uk is still coolexample. You define the SLD when you register a domain name.
A country code second-level domain (ccSLD) is a domain name class that many country code top-level domain (ccTLD) registries implement. The ccSLD portion of the domain name is located between the ccTLD and the SLD. Example: The ccSLD in coolexample.co.uk is .co.
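
These definitions can be demonstrated with simple dot-splitting (a sketch only; robust handling of multi-part suffixes requires the Public Suffix List):

```shell
domain='coolexample.co.uk'

tld=$(echo "$domain" | awk -F. '{print $NF}')        # rightmost label
ccsld=$(echo "$domain" | awk -F. '{print $(NF-1)}')  # label left of the ccTLD
sld=$(echo "$domain" | awk -F. '{print $1}')         # leftmost label here

echo "TLD=$tld ccSLD=$ccsld SLD=$sld"   # prints: TLD=uk ccSLD=co SLD=coolexample
```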

What are top-level domains (TLD) and country code top-level domains (ccTLD)?

A top-level domain (TLD) is the part of the domain name located to the right of the dot (" . "). The most common TLDs are .com, .net, and .org. Some others are .biz, .info, and .ws. These common TLDs all have certain guidelines, but are generally available to any registrant, anywhere in the world.
There are also restricted top-level domains (rTLDs), like .aero, .biz, .edu, .mil, .museum, .name, and .pro, that require the registrant to represent a certain type of entity, or to belong to a certain community. For example, the .name TLD is reserved for individuals, and .edu is reserved for educational entities.
Country-code TLDs (ccTLDs) represent specific geographic locations. For example: .mx represents Mexico and .eu represents the European Union. Some ccTLDs have residency restrictions. For example, .eu requires registrants to live or be located in a country belonging to the European Union. Other ccTLDs, like the ccTLD .it representing Italy, allow anyone to register them, but require a trustee service if the registrant is not located in a specified country or region. Finally, there are ccTLDs that can be registered by anyone — .co representing Colombia, for example, has no residency requirements at all.

What is a Domain Name?

New computer users often confuse domain names with universal resource locators, or URLs, and Internet Protocol, or IP, addresses. This confusion is understandable. It is worth learning the differences between them because these terms are ubiquitous. It is also helpful to be able to use terms correctly when communicating to technicians or other people within a professional organization.
This naming convention is analogous to a physical address system. People find web pages in a manner similar to the way that they use maps to find physical locations. If the Internet is like a phone book, and a web page is like a physical building, the URL would be the precise street address of that building. The IP address would be like the car that travels to its destination. There are also other useful metaphors for understanding this relationship.

Domain Names and URLs

The universal resource locator, or URL, is an entire set of directions, and it contains extremely detailed information. The domain name is one of the pieces inside of a URL. It is also the most easily recognized part of the entire address. When computer users type a web address directly into the field at the top of their browser window, it initiates a process of locating the page requested. To do so, the instructions contained inside the URL, including the domain name, must correctly point to that location. The IP address is a numerical code that makes this possible.

Domain Names and IP Addresses

An Internet Protocol, or IP, address is different than a domain name. The IP address is an actual set of numerical instructions. It communicates exact information about the address in a way that is useful to the computer but makes no sense to humans. The domain name functions as a link to the IP address. Links do not contain actual information, but they do point to the place where the IP address information resides. It is convenient to think of IP addresses as the actual code and the domain name as a nickname for that code. A typical IP address looks like a string of numbers. It could be 232.17.43.22, for example. However, humans cannot understand or use that code. To summarize, the domain name is a part of the URL, which points to the IP address.

What's in a Domain Name?

Domain names function on the Internet in a manner similar to a physical address in the physical world. Each part of the domain name provides specific information. These pieces of information enable web browsers to locate the web page. The naming system is closely regulated in order to prevent confusion or duplicate addresses. As demand increased exponentially, a new Internet Protocol version, IPv6, was created to expand the number of IP addresses available.

How do Domains Work?

Domain names work because they provide computer users with a short name that is easy to remember. Users enter web addresses into the URL field at the top of their browser's page from left to right. The domain name itself is read from right to left according to the naming hierarchy discussed below. This link provides directions to the network, which ultimately results in a successful page load at the client end of the transaction.
The common fictitious domain name, www.example.com, is composed of three essential parts:
  • .com - This is the top-level domain.
  • example - This is the second-level domain.
  • www. - This is a sub-domain prefix for the World Wide Web. The original use of this prefix was partly accidental, and pronunciation difficulties raised interest in creating viable alternatives.
Many servers use a three-letter naming convention for top-level domains, and they are separated from sub-domains by a dot. The significance of the top-level domain is the most important for new users to grasp. It identifies the highest part of the naming system used on the Internet. This naming system was originally created to identify countries and organizations as well as categories.
The most common categories are easily recognized by new computer users, and they include:
  • .com
  • .org
  • .edu
  • .net
  • .mil
A significant expansion of the top-level domains occurred, and they now include:
  • .biz
  • .museum
  • .info
  • .name
Country codes are also easily recognizable to new users because the abbreviations are the same ones used for other purposes. The organization of the domain name hierarchy and the ability to reserve them for only one purpose has already undergone several modifications. Discussions and debates concerning the availability and affordability of domain names can be expected to continue.
Sub-domains are organized to the left of the top-level domain, and this is the part of the domain system that is most recognizable to humans. It is common to see several levels of sub-domains, and some countries developed specific conventions of organization to communicate information within their internal naming systems.

- reference: GoDaddy

Factors that affect DNS propagation time

When you update the DNS (Domain Name System) records in your domain name's zone file, it can take up to 48 hours for those updates to propagate throughout the Internet. While we strive to make updates as quickly as possible, the DNS propagation time for your domain name depends on several factors that we cannot control.
Many of the updates you can make in the Domain Manager affect the DNS records in your domain name's zone file. For example, if you set nameservers, enable forwarding or masking, enable DNSSEC (Domain Name System Security Extensions), set hosts and IP addresses, create mobile websites, or enable CashParking, you update your domain name's zone file.
Factors that affect DNS propagation time include:
  • Your TTL (Time to Live) settings — You can set the TTL for each DNS record in your domain name's zone file. TTL is the time period for which servers cache the information for your DNS records. For example, if you set the TTL for a particular record to one hour, servers store the information for that record locally for an hour before retrieving updated information from your authoritative nameserver. Shorter TTL settings can increase propagation speed. However, shorter settings also increase the number of queries to your authoritative nameserver, and that increased load slows your server's processing time.
  • Your ISP (Internet Service Provider) — Your ISP caches DNS records (stores the data locally rather than retrieving fresh data from your DNS server) to speed up Web browsing and reduce traffic, which slows your propagation time. Some ISPs ignore TTL settings and only update their cached records every two to three days.
  • Your domain name's registry — If you change your domain name's nameservers, we relay your change request to the registry within minutes, and they publish your authoritative NS (nameserver) records to their root zone. Most registries update their zones promptly. For example, VeriSign refreshes zones for .com domain names every three minutes. However, not all registries make updates that quickly. Registries often protect their root nameservers from overuse by setting a high TTL of up to 48 hours or more for those NS records. In addition, even though recursive nameservers should not cache the root NS records, some ISPs cache the information anyway, which can result in a longer nameserver propagation time.
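
A small worked example of how TTL bounds staleness (all numbers hypothetical): if a resolver cached a record just before you changed it, it may keep serving the old value until that cached copy's TTL expires.

```shell
ttl=3600          # TTL on the record, in seconds (1 hour)
cached_at=900     # epoch second when the resolver cached the old record
changed_at=1000   # epoch second when you changed the record

expires_at=$((cached_at + ttl))
stale_for=$((expires_at - changed_at))
echo "old value may be served for up to $stale_for more seconds"   # 3500 here
```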

    - reference: GoDaddy

Jmeter Client Server Configuration Settings


Client IP:  192.168.0.193
Server IP:  192.168.0.53

Configuring Server: 192.168.0.53

ssh -L 7000:192.168.0.53:7000 -L 7002:192.168.0.53:7002 -R 7003:192.168.0.53:7003 shobhit@192.168.0.53

  • Go to /opt
  • Download and extract the Apache JMeter archive
  • Edit the jmeter.properties file:
    • #remote_hosts=127.0.0.1
    • server_port=7000
    • server.rmi.localport=7002
  • Go to the bin folder and run ./jmeter-server -Djava.rmi.server.hostname=192.168.0.53

Configuring Client: 192.168.0.193

  • Go to /opt
  • Download and extract the Apache JMeter archive
  • Edit the jmeter.properties file:
    • remote_hosts=192.168.0.53:7000
    • client.rmi.localport=7001
    • mode=Standard
  • Go to the bin folder and run ./jmeter -Djava.rmi.server.hostname=192.168.0.193