Monday, May 30, 2022

Formal Methods

Formal methods are system design techniques that build software and hardware systems from a rigorously specified mathematical model. Unlike other design approaches, formal methods use mathematical proof as a complement to system testing in order to ensure correct behavior. As systems become more complex, and as security becomes an increasingly important concern, a formal approach to system design offers another layer of assurance. It is important to note that formal verification does not remove the need for testing. Formal verification cannot fix bad assumptions in the design, but it can help identify errors in reasoning that would otherwise go undetected. In several cases, engineers have reported finding flaws in a system only after formally reviewing its design. Broadly speaking, formal design can be seen as a three-step process, outlined below.

1. Formal Specification: During the formal specification phase, engineers rigorously define the system using a modeling language. A modeling language is a fixed grammar that allows users to model complex structures out of predefined types. The process of writing a formal specification is similar to converting a word problem into algebraic notation. In many respects, this step of the formal design process resembles the formal software engineering techniques developed by Rumbaugh, Booch, and others; at the least, both help engineers clearly define their problems, goals, and solutions. However, a formal modeling language is more rigorously defined: its grammar distinguishes between WFFs (well-formed formulas, i.e. syntactically correct logical expressions) and non-WFFs (syntactically incorrect statements). Even at this stage, the distinction between a WFF and a non-WFF helps tighten the specification, and several engineers who have used formal specification have said that the clarity produced at this stage is a benefit in itself.
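As a rough illustration of the shift from prose to precise statements, here is a minimal sketch of a specification expressed as pre- and postconditions in Python. The operation, names, and conditions are assumptions chosen for the example, not part of any particular specification language.

# Minimal sketch of a specification expressed as explicit pre/postconditions.
# The operation and its conditions are illustrative assumptions.
def withdraw(balance: int, amount: int) -> int:
    # Precondition: the request is well-formed and covered by the balance.
    assert 0 < amount <= balance
    new_balance = balance - amount
    # Postcondition: the balance decreases by exactly the amount and never
    # becomes negative.
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance

print(withdraw(100, 30))  # 70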

2. Verification: As mentioned above, formal methods differ from other specification systems in their emphasis on provability and correctness. By building the system from a formal specification, the designer is in effect developing a set of theorems about the system; by proving these theorems correct, the designer verifies the specification. Verification is a difficult process, largely because even the simplest system involves several dozen theorems, each of which must be proven. Even a traditional mathematical proof is a complex affair: Wiles' proof of Fermat's Last Theorem, for example, took several years after its announcement to be completed. Given this complexity and the pace set by Moore's law, almost all formal systems use some form of automated theorem-proving tool. These tools can prove simple theorems, verify the semantics of theorems, and provide assistance in checking more complicated proofs.
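Real theorem provers reason symbolically, but the flavor of mechanically checking a claimed property can be conveyed with a toy exhaustive check; the property, operation, and bounds below are assumptions chosen purely for illustration.

# Toy "verification" by exhaustively checking a property over a finite domain.
# Real provers reason symbolically; this only conveys the idea of mechanically
# confirming a theorem about the specified operation.
def withdraw(balance: int, amount: int) -> int:
    return balance - amount

# Claimed theorem (restricted to this toy domain): whenever
# 0 < amount <= balance, the resulting balance is never negative.
for balance in range(0, 50):
    for amount in range(1, balance + 1):
        assert withdraw(balance, amount) >= 0

print("property holds on the checked domain")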

3. Implementation: Once the model has been specified and verified, it is implemented by converting the specification into code. As the distinction between software and hardware design narrows, formal methods have also emerged for designing embedded systems; LARCH, for example, has a VHDL implementation. Similarly, hardware systems such as the VIPER and AAMP5 processors have been developed using a formal approach.


Key concepts of formal methods:

Provability and automated verification: Formal methods differ from other specification systems in their emphasis on correctness and proof, which is ultimately another measure of system integrity. Proof is a supplement to testing, not a substitute for it. Testing is an important part of assuring the suitability of any system, but it is finite: it cannot show that the system functions correctly in general, only that it works in the cases that were tested. Because testing says nothing about behavior outside the tested cases, formal proof is needed. Formally proving properties of computer systems is not a new idea; Knuth and Dijkstra wrote extensively on the subject, but their proofs rely on traditional mathematical methods. In pure mathematics, proofs are checked through extensive peer review prior to publication. Such techniques are time consuming and far from perfect, and it is not uncommon for published proofs to contain errors. Given the cost and schedule pressures of systems engineering, this kind of manual proof checking is not really practical. Because of the cost of manual verification, most formal methods use an automated theorem-proving system to validate the design. An automated theorem prover is best described as a mathematical CAD tool: it proves simple theorems automatically and helps check more complex ones.


Advantages:

Formal methods offer benefits beyond provability, and these are worth mentioning. However, most of these benefits can also be obtained from other approaches, usually without the steep learning curve that formal methods demand.

Discipline- Because of their rigor, formal systems require engineers to think through their designs more thoroughly. In particular, a formal proof obligation demands a rigorous specification of goals, not just of operations. This thoroughness helps expose faulty reasoning much earlier than traditional design methods. The discipline of formal specification has also proven itself on real systems; for example, engineers using the PVS system reported that it identified several microcode errors in one of their microprocessor designs.

Precision- Traditionally, disciplines have moved to jargon and formal notation as the weaknesses of natural-language description became apparent. There is no reason that systems engineering should be different, and several formal methods are used almost exclusively as notation. For engineers designing safety-critical systems, the key benefit of formal methods is their precision. Unlike many other design approaches, formal verification requires very clearly defined goals and approaches. In a safety-critical system, ambiguity can be extremely dangerous, and one of the primary benefits of the formal approach is that it eliminates ambiguity.


Disadvantages:

Bowen points out that formal methods are generally regarded with suspicion by the professional engineering community, and that the early case studies and papers advocating formal methods tended to overstate their case [Bowen93]. There are several reasons why formal methods are not used more often, most of them stemming from the exaggerated claims of formal methods advocates.

Cost- Because of their rigor, formal methods are always more expensive than traditional approaches to engineering. How much more expensive formal verification really is, however, is debatable, since software cost estimation is more art than science. Formal methods generally have higher startup costs and consume fewer resources as the project progresses, which is the reverse of the usual software development cost model.

Computation Model Limitations- This is not a universal problem, but most formal methods introduce some model of computation and usually restrict the operations allowed in order to keep the notation elegant and the system provable. Unfortunately, from a developer's point of view, these design restrictions are often seen as intolerable.

Usability- Traditionally, formal methods have been judged on the richness of their descriptive model: a "good" formal method can describe a wide range of systems, while a "bad" one has limited descriptive power. From a theoretical standpoint comprehensive descriptive power is appealing, but the pursuit of that goal has produced description languages so complex and subtle that they take on the very difficulties of natural language. Case studies of fully formal methods often acknowledge the need for a more usable approach.


The Lightweight approach:

In recent years, attention has turned to the shortcomings of fully formal specification, and several alternative approaches have emerged. The traditional view of formal methods as a comprehensive, highly abstract scheme made them all-encompassing, very rigorous, and very expensive; attractive in theory, they were generally ignored by practicing engineers. A lightweight approach to formal design recognizes that formal methods are not a panacea: there are areas where formal methods are useful and areas where formal specification adds little. Lightweight designs apply formal methods only in selected places, and different methods can be used for different subsystems, ideally playing to the strengths of each. In such a system, Petri nets might describe the communication protocols and a LARCH specification might model the data store, while the rest of the system skips formal specification altogether; the user interface, for example, might be refined through rapid prototyping and customer interviews.


Available tools, techniques, and metrics

Larch: Unlike most formal systems, LARCH provides two levels of specification: a general high-level modeling language and a collection of implementation dialects designed to work with specific programming languages.

SML: Standard ML (Meta Language) is a strongly typed functional programming language originally designed for exploring ideas in type theory. SML has become a workhorse of formal methods because of its strong typing and provability features.

HOL: HOL, short for Higher Order Logic, is an automated theorem proving system. As with most automated theorem proving systems, HOL is a computer-aided proof tool: it proves simple theorems and assists in proving more complicated statements, but is still dependent on interaction with a trained operator. HOL has been extensively used for hardware verification, the VIPER chip being a good example.

Petri Nets: Petri Nets are a good example of a very 'light' formal specification. Originally designed for modeling communications, Petri Nets are a graphically simple model for asynchronous processes.
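To give a flavor of how "light" the notation can be, here is a minimal token-game sketch of a Petri net in Python; the places and transitions model a one-message send/deliver protocol and are assumptions chosen for illustration.

# Minimal sketch of a Petri net: places hold tokens, and a transition fires
# when every one of its input places has a token. Names are illustrative.
places = {"ready_to_send": 1, "channel": 0, "received": 0}

transitions = {
    "send":    {"inputs": ["ready_to_send"], "outputs": ["channel"]},
    "deliver": {"inputs": ["channel"],       "outputs": ["received"]},
}

def fire(name: str) -> bool:
    t = transitions[name]
    if all(places[p] > 0 for p in t["inputs"]):  # transition enabled?
        for p in t["inputs"]:
            places[p] -= 1
        for p in t["outputs"]:
            places[p] += 1
        return True
    return False

assert fire("send") and fire("deliver")
assert places["received"] == 1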

Monday, May 23, 2022

Computer Security models

A computer security model is a scheme for specifying and enforcing security policies. In particular, a security model defines the relationship between the important aspects of security and the behavior of the operating system. A security model may be founded on a formal model of access rights, a model of computation, a model of distributed computing, or it may have no particular theoretical grounding. Some well-known security models are described below.



Bell-LaPadula Model:


The Bell-LaPadula model was originally developed for the US Department of Defense (DoD). It was the first mathematical model of a multilevel security policy, introducing the concepts of secure states and mandatory access control. It ensures that information flows only in ways that preserve confidentiality and do not violate system policy.


The Bell-LaPadula model defines the rules and properties described below; a small code sketch of the read and write rules follows them.


Simple Security Property: "no read up". A subject at a given clearance level cannot read objects at a higher classification level. For example, a subject with Secret clearance cannot read a Top Secret object.


*-Property (Star Property): "no write down". A subject at a higher clearance level cannot write to objects at a lower classification level. For example, a subject logged into a Top Secret system cannot send email to a Secret system.


Strong Tranquility Property: security labels do not change while the system is operating.


Weak Tranquility Property: security labels may change, but never in a way that violates the defined security properties.
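The two access rules can be illustrated with a short sketch. This is a minimal illustration under assumed clearance levels and function names; it is not drawn from any standard library.

# Minimal sketch of the Bell-LaPadula read/write rules.
# Levels, names, and functions are illustrative assumptions.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # Simple Security Property: "no read up"
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance: str, object_classification: str) -> bool:
    # *-Property: "no write down"
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

# A subject with Secret clearance cannot read a Top Secret object,
# and cannot write down to a Confidential object.
assert not can_read("Secret", "Top Secret")
assert not can_write("Secret", "Confidential")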



Biba Model:

The Biba model is structurally similar to BLP, but its focus is not confidentiality. Integrity is the main concern of the Biba model, and it is used where integrity matters more than confidentiality; it is easiest to think of it as BLP with the rules reversed. Confidentiality is a major concern for many governments, but most companies want assurance that the integrity of their data is maintained at the highest level, and Biba is the model of choice when guaranteeing integrity is most important. Its two main rules are the Simple Integrity Axiom and the * (Star) Integrity Axiom, described below; a small sketch of both follows them.


Simple Integrity Axiom: "no read down". A subject at a given integrity level cannot read information at a lower integrity level. This keeps subjects from relying on data of lower integrity, and prevents potentially corrupt or malicious low-integrity information from influencing their work.


* (Star) Integrity Axiom: "no write up". A subject at a given integrity level cannot write information to a higher integrity level. This keeps subjects from pushing potentially faulty material up to higher integrity levels, and so protects the integrity of the data held there.
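These two axioms are the mirror image of the Bell-LaPadula rules, as the following minimal sketch shows; the integrity levels and function names are assumptions for the example.

# Minimal sketch of the Biba integrity rules: the inverse of Bell-LaPadula.
# Levels and function names are illustrative assumptions.
INTEGRITY = {"Low": 0, "Medium": 1, "High": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple Integrity Axiom: "no read down"
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # * Integrity Axiom: "no write up"
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

# A High-integrity subject may not read Low-integrity data,
# and a Low-integrity subject may not write to High-integrity data.
assert not can_read("High", "Low")
assert not can_write("Low", "High")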



Clark Wilson Model:


The Clark-Wilson model deals with two types of data objects: CDIs (constrained data items) and UDIs (unconstrained data items). It also defines two kinds of procedures: IVPs (integrity verification procedures) and TPs (transformation procedures). An IVP confirms that the CDIs are in a valid state, while a TP is a certified, well-formed transaction that takes the CDIs from one valid state to another. Only certified TPs are allowed to manipulate CDIs. In other words, when this integrity model is properly implemented, it protects the integrity of information by ensuring that only well-formed, certified transactions operate on the protected data.
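A minimal sketch of the enforcement idea, that only certified TPs may modify CDIs, might look like the following; the account example, names, and rule encoding are assumptions, not part of the model's formal statement.

# Minimal sketch of Clark-Wilson enforcement: only certified TPs may touch CDIs.
# Names and structure are illustrative assumptions, not a standard API.
certified_tps = {("deposit", "account_balance"), ("withdraw", "account_balance")}

cdis = {"account_balance": 100}  # constrained data items

def run_tp(tp_name: str, cdi_name: str, amount: int) -> None:
    # Enforcement rule: the (TP, CDI) pair must be certified.
    if (tp_name, cdi_name) not in certified_tps:
        raise PermissionError(f"{tp_name} is not certified for {cdi_name}")
    if tp_name == "deposit":
        cdis[cdi_name] += amount
    elif tp_name == "withdraw":
        cdis[cdi_name] -= amount

def ivp_balance_non_negative() -> bool:
    # Integrity verification procedure: check that the CDI is in a valid state.
    return cdis["account_balance"] >= 0

run_tp("deposit", "account_balance", 50)
assert ivp_balance_non_negative()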



Brewer and Nash Model:


Also known as the Chinese Wall model, this model is designed to avoid conflicts of interest: a user such as a consultant is prevented from accessing data for more than one client within the same conflict-of-interest (COI) class. The access control policy changes dynamically with the user's behavior; once a person has accessed one company's information, they can no longer access the information of a competing company in the same COI class.
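The history-dependent rule can be sketched as follows; the company names, COI classes, and functions are assumptions chosen for illustration.

# Minimal sketch of the Brewer-Nash (Chinese Wall) rule.
# Company names and COI classes are illustrative assumptions.
COI_CLASS = {"BankA": "banks", "BankB": "banks", "OilCo": "energy"}
accessed: dict[str, set[str]] = {}  # user -> companies already accessed

def can_access(user: str, company: str) -> bool:
    # Allowed unless the user already accessed a *different* company
    # in the same conflict-of-interest class.
    for prior in accessed.get(user, set()):
        if prior != company and COI_CLASS[prior] == COI_CLASS[company]:
            return False
    return True

def access(user: str, company: str) -> None:
    if not can_access(user, company):
        raise PermissionError("conflict of interest")
    accessed.setdefault(user, set()).add(company)

access("alice", "BankA")
assert not can_access("alice", "BankB")  # same COI class: blocked
assert can_access("alice", "OilCo")      # different COI class: allowed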



Harrison Ruzzo Ullman Model:


The Harrison-Ruzzo-Ullman (HRU) model can be considered an extension of the BLP model. The Bell-LaPadula model provides no mechanism for changing access rights or for creating and deleting subjects and objects. The HRU model addresses these issues by defining an authorization structure of commands that grant and revoke rights, and by checking that those commands comply with the stated policy, thereby preventing unauthorized access. The HRU model can be implemented with access control lists or capability lists.
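An HRU-style system is usually pictured as an access matrix plus commands that modify it; here is a minimal sketch, with the command names and rights chosen as assumptions for the example.

# Minimal sketch of an HRU-style access matrix with commands that change rights.
# The command names and rights are illustrative assumptions.
matrix: dict[tuple[str, str], set[str]] = {}  # (subject, object) -> rights

def create_object(subject: str, obj: str) -> None:
    # The creator becomes the owner of the new object.
    matrix[(subject, obj)] = {"own", "read", "write"}

def grant_right(owner: str, subject: str, obj: str, right: str) -> None:
    # Only an owner may confer a right on another subject.
    if "own" not in matrix.get((owner, obj), set()):
        raise PermissionError("only the owner may grant rights")
    matrix.setdefault((subject, obj), set()).add(right)

create_object("alice", "report.txt")
grant_right("alice", "bob", "report.txt", "read")
assert "read" in matrix[("bob", "report.txt")]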

Tuesday, May 17, 2022

Firewall and packet filters

A firewall is a network security device that monitors and filters incoming and outgoing network traffic according to an organization's previously established security policies. At its simplest, a firewall is the barrier that sits between a private internal network and the public Internet.

Packet filtering is a firewall technique used to monitor outgoing and incoming packets and to control network access by passing or dropping packets based on their source and destination IP addresses, protocols, and ports.

Firewalls have been the first and most reliable line of defense in network security for more than 30 years. Firewalls first appeared in the late 1980s, initially as simple packet filters: little more than devices that inspected the packets, or bytes, exchanged between computers. Although firewalls have grown far more sophisticated through ongoing development, such packet-filter firewalls are still found in legacy systems. Gil Shwed of Check Point Software Technologies introduced the first stateful inspection firewall, FireWall-1, in 1993. In 2000, NetScreen released a dedicated firewall appliance; offering higher throughput and lower latency at lower cost, it became popular and was quickly adopted by businesses.

How does a firewall protect a network?

A firewall analyzes network traffic against predefined rules and filters out traffic coming from untrusted or suspicious sources, allowing only the inbound traffic it is configured to accept. Normally, a firewall inspects network traffic at a computer's entry points, called ports. It does this by allowing or blocking individual data packets (the units of communication sent over a digital network) according to predefined security rules, permitting inbound traffic only from trusted IP addresses or sources.
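The rule-matching idea can be sketched in a few lines; the rule format, fields, and addresses below are assumptions for illustration, not the configuration syntax of any real firewall.

# Minimal sketch of rule-based packet filtering.
# The rule format and packet fields are illustrative assumptions.
RULES = [
    # (action, source prefix, protocol, destination port)
    ("allow", "192.168.1.", "tcp", 443),  # internal hosts may reach HTTPS
    ("deny",  "",           "tcp", 23),   # block Telnet from anywhere
]

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> str:
    # First matching rule wins; anything unmatched is denied by default.
    for action, prefix, proto, port in RULES:
        if src_ip.startswith(prefix) and protocol == proto and dst_port == port:
            return action
    return "deny"

assert filter_packet("192.168.1.10", "tcp", 443) == "allow"
assert filter_packet("203.0.113.5", "tcp", 23) == "deny"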
