Wednesday, September 08, 2021

An Introduction to Quantum Computing

In the 1930s, key figures such as Alan Turing developed the classical theory of computation. These theories describe the limits of what algorithms can compute and are still used today. It is interesting to note that modern computers as we know them appeared only in the 1950s. Contemporary computers then developed rapidly, from valve technology to VLSI integrated circuits. Modern processor features have reached such a small scale that they are influenced by the strange rules of quantum mechanics.

These quantum effects set a limit on further miniaturization, which has been one of the main ways of improving processor performance. Some new machines, however, are designed to turn these effects to their advantage: quantum computers.

Richard Feynman led the way by creating an abstract model showing how, in principle, a quantum system could be used to perform computations. Then, in 1985, David Deutsch published a groundbreaking theoretical paper describing how any physical process could, in principle, be modelled by a quantum computing system. He stated that a computer of this type could perform tasks that no classical computer can, such as generating genuinely random numbers. The most powerful feature of a quantum computer, however, may be its ability to use the phenomenon of "quantum parallelism" to perform certain types of calculations in a fraction of the time taken by a classical computer.

Turing machines, developed by Alan Turing in the 1930s, are theoretical devices consisting of a tape of unlimited length divided into small squares. Each square can hold a symbol (1 or 0) or be left blank. A read-write head reads these symbols and blanks, which give the machine the instructions of the program it executes. Does this sound familiar? In a quantum Turing machine, the difference is that the tape, as well as the read-write head, exists in a quantum state. This means that a symbol on the tape can be 0, 1, or a superposition of 0 and 1; in other words, the symbol is both 0 and 1 (and every state in between) at the same time. A standard Turing machine can perform only one calculation at a time, whereas a quantum Turing machine can perform many calculations at once.

Like Turing machines, today's computers work by manipulating bits that exist in one of two states: 0 or 1. Quantum computers are not limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent particles and their respective control devices, which work together to act as the computer's memory and processor. Because a quantum computer can hold these multiple states simultaneously, it has the potential to be many times more powerful than today's most powerful supercomputers.

The superposition of qubits is what gives quantum computers their inherent parallelism. According to the physicist David Deutsch, this parallelism allows a quantum computer to work on millions of computations at once, while a desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops (trillions of floating-point operations per second); today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
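
To make superposition concrete, here is a minimal sketch (Python with NumPy; the function name and qubit count are illustrative choices, not part of the original article) that represents an n-qubit register as a vector of 2^n amplitudes and applies a Hadamard gate to every qubit, putting the register into an equal superposition of all 2^n classical bit strings at once:

import numpy as np

def hadamard_all(n_qubits):
    """State vector of n qubits after applying a Hadamard gate to each one.

    The register starts in |00...0>; the result is an equal superposition of
    all 2**n_qubits classical bit strings.
    """
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate
    state = np.array([1.0])                        # empty register, amplitude 1
    for _ in range(n_qubits):
        # Add one qubit in |0> = [1, 0], apply H to it, and tensor it in.
        state = np.kron(state, H @ np.array([1.0, 0.0]))
    return state

amps = hadamard_all(3)
print(amps)                       # eight equal amplitudes of 1/sqrt(8)
print(np.sum(np.abs(amps) ** 2))  # probabilities sum to 1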

A quantum computer also uses another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to examine the subatomic particles, you may bump them and thereby change their values. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your quantum computer into an ordinary digital computer). To create a practical quantum computer, scientists must therefore devise ways of taking measurements indirectly so as to preserve the system's integrity. Quantum physics provides an answer: if an external force is applied to two atoms, they can become entangled, and the second atom takes on the properties of the first. Left alone, an atom will spin in all directions; the instant it is disturbed, it chooses one spin, or one value, and at the same moment the second, entangled atom chooses the opposite spin, or value. This allows scientists to know the value of a qubit without actually looking at it.
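
As a rough illustration of the measurement behaviour just described, the following sketch (again Python with NumPy, purely illustrative) prepares two qubits in an entangled Bell state and samples joint measurements; the two simulated qubits always come out with opposite values, mirroring the "opposite spin" of entangled atoms:

import numpy as np

# Bell state (|01> + |10>) / sqrt(2): two entangled, anti-correlated qubits.
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2                 # Born rule: probability of each outcome

rng = np.random.default_rng(seed=0)
for _ in range(5):
    outcome = rng.choice(4, p=probs)          # measure both qubits at once
    q0, q1 = (outcome >> 1) & 1, outcome & 1  # decode the two measured bits
    print(q0, q1)                             # always opposite: (0, 1) or (1, 0)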

Next, we'll look at some recent advancements in the field of quantum computing.

QUBIT CONTROL

Computer scientists use control devices to control microscopic particles like qubits in quantum computers.

• The ion trap uses light or magnetic fields (or a combination of both) to trap ions.

• The optical trap uses light waves to trap and control particles.

• Quantum dots are made of semiconductor materials and are used to contain and manipulate electrons.

• Semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor materials.

• The superconducting circuit allows electrons to flow with little resistance at low temperatures.

Advantages of quantum computing

It has been shown theoretically that a quantum computer can perform anything a classical computer can. However, this does not necessarily mean that a quantum computer is superior to a classical computer for every kind of task. Running a classical algorithm on a quantum computer simply reproduces what the classical computer would do. To demonstrate the superiority of a quantum computer, you must use new algorithms designed to exploit quantum parallelism.

Such algorithms are not easy to formulate, but once discovered they bring spectacular results. One example is the quantum factoring algorithm created by Peter Shor of AT&T Bell Laboratories. The algorithm factors large numbers into their prime factors, a task that is classically very hard; in fact, the presumed difficulty of factoring forms the basis of RSA, probably the most common encryption method in use today. By exploiting quantum parallelism, Shor's algorithm can in principle deliver the prime factorization of a very large number in a short time, whereas a classical computer would, in some cases, need longer than the age of the universe.
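
For contrast with the quantum approach, here is a minimal classical sketch (Python, illustrative only; the function name is my own) of the number-theoretic core that Shor's algorithm speeds up: finding the period r of a^x mod N and using it to extract a factor. Classically the period search is brute force, which is exactly what becomes intractable for the large numbers used in RSA:

import math

def find_factor(N, a=2):
    """Try to factor N via the period of a^x mod N (brute-force period search)."""
    g = math.gcd(a, N)
    if g != 1:
        return g                      # lucky case: a already shares a factor with N
    # Classical period search takes exponential time in the size of N;
    # Shor's algorithm finds the period with a quantum Fourier transform instead.
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    if r % 2 == 1:
        return None                   # odd period: retry with a different a
    candidate = math.gcd(pow(a, r // 2, N) - 1, N)
    return candidate if 1 < candidate < N else None

print(find_factor(15))      # 3 (so 15 = 3 * 5)
print(find_factor(21))      # 7 (so 21 = 3 * 7)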

Disadvantages of quantum computing

The technology required to build a practical quantum computer is currently out of reach. This is because the coherent quantum state on which the operation of a quantum computer depends is destroyed almost immediately by interaction with its environment. Attempts to battle this problem have so far had little success, but the hunt for practical solutions continues.

The implications of the theory behind quantum computing go beyond simply building faster computers; its scope is much wider.

Quantum Communication

Many research groups are working on quantum communication systems. These allow a sender and a receiver to agree on a secret code without ever meeting in person. The uncertainty principle, an inherent property of the quantum world, ensures that if an eavesdropper tries to monitor the signal in transit, the disturbance will reveal the intrusion to the sender and receiver.
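
The key-agreement idea can be sketched classically. The following toy simulation (Python, assuming a BB84-style protocol, which the article does not name explicitly) shows how sender and receiver keep only the bits where their randomly chosen bases match; an eavesdropper measuring in the wrong basis would disturb exactly those bits and give herself away:

import random

random.seed(1)
n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]    # bases used to encode
bob_bases   = [random.choice("+x") for _ in range(n)]    # bases used to measure

bob_bits = []
for bit, basis_a, basis_b in zip(alice_bits, alice_bases, bob_bases):
    if basis_a == basis_b:
        bob_bits.append(bit)                   # same basis: the result is certain
    else:
        bob_bits.append(random.randint(0, 1))  # wrong basis: the result is random

# The two parties publicly compare bases (never bits) and keep matching positions.
key_alice = [b for b, x, y in zip(alice_bits, alice_bases, bob_bases) if x == y]
key_bob   = [b for b, x, y in zip(bob_bits, alice_bases, bob_bases) if x == y]
print(key_alice == key_bob)   # True: the sifted keys agree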

Quantum cryptography

The expected power of quantum computing promises significant advances in the world of encryption. Ironically, the same technology also poses a global threat to the encryption techniques in use today. The implications of Shor's factoring algorithm for the world of encryption are enormous: the ability to break the RSA encryption system could destabilize almost all current communication channels.

Comparing IaaS, PaaS, SaaS, NaaS, and IDaaS

Introduction

Cloud computing lets us access computing resources over a network rather than setting everything up on a local host or private server. In cloud computing, data is maintained on remote servers operated by a third-party provider and accessed over the internet. The provider manages the underlying infrastructure so that data can be accessed reliably and without corruption.

What are SaaS, PaaS, and IaaS?
SaaS is used to provide ready-to-use applications.
PaaS is used to provide a platform on which customers deploy their own applications.
IaaS is used to provide infrastructure such as storage, compute, and networking capacity.

IaaS (Infrastructure as a Service)
IaaS delivers fundamental computing infrastructure, both hardware and virtualized software resources, as a cloud service. It also provides disk space on remote servers and manages data using block storage, firewalls, and load balancing.

Example: Amazon Web Services (AWS), Google Compute Engine.
a. Provides the base layer.
b. Deals with virtual machines, storage, servers, networking, and load balancing (see the sketch below).
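
As an illustration of what "dealing with virtual machines" looks like in practice, here is a minimal sketch of provisioning a virtual machine on an IaaS cloud, assuming the AWS boto3 SDK; the region and AMI ID are placeholder values, not anything from the article:

import boto3

# Connect to the EC2 virtual machine service in a chosen region.
ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch one small virtual machine from a placeholder machine image.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID, not a real image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)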


Characteristics of IaaS
· Resources are distributed as a service.
· Permits dynamic scaling.
· Has a dynamic, utility-based pricing model.
· Allows multiple users on a single piece of hardware.

PaaS (Platform as a Service)
PaaS provides a cloud computing platform that includes an operating system, programming-language runtimes and compilers, an execution environment, databases, web servers, and so on.
Example- Windows Azure, AWS Elastic Beanstalk.

a. It is the layer on top of IaaS.
b. Provides runtime environments, databases (MySQL, Oracle), and web servers (Tomcat, etc.); see the sketch below.
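
To show what the customer actually supplies on a PaaS, here is a minimal sketch of an application that could be deployed to a platform such as AWS Elastic Beanstalk, assuming the Flask framework; the platform, not the developer, provides the operating system, web server, and database:

from flask import Flask

# Elastic Beanstalk's Python platform typically looks for a WSGI object
# named "application", so that name is used here.
application = Flask(__name__)

@application.route("/")
def index():
    return "Hello from an application running on a PaaS"

if __name__ == "__main__":
    # For local testing only; in the cloud, the platform's web server runs the app.
    application.run(debug=True)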

Characteristics of PaaS
· Provides tools to handle billing and subscription management.
· Integrates with web services and databases.
· Supports collaboration among development teams using PaaS solutions.

SaaS (Software as a Service)

SaaS provides access to application software over the internet and is often referred to as on-demand software. The provider handles the setup, installation, and running of the application; clients simply access the software and pay for what they use (a minimal sketch of calling a SaaS API appears after the examples below).

Example: Google Apps, Microsoft Office 365.
a. It is the layer on top of PaaS.
b. Applications such as email services (Yahoo Mail, Gmail, Rediffmail, etc.).
c. Social networking sites such as Facebook and Orkut.
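
Since SaaS is consumed over the web, a client often just calls the provider's HTTP API. The sketch below (Python with the requests library) fetches a document list from a purely hypothetical SaaS endpoint; the URL and token are illustrative placeholders, not a real service:

import requests

API_URL = "https://saas-provider.example.com/api/v1/documents"  # hypothetical endpoint
TOKEN = "replace-with-your-api-token"                           # placeholder credential

response = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
response.raise_for_status()
for document in response.json():
    print(document.get("name"))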

Characteristics of SaaS
· Commercial software accessed over the web.
· Software is managed from a central location.
· Software delivered in a one-to-many model.

NaaS (Network as a Service)

NaaS provides network services for accessing data over the internet. Customers create virtual networks and pay on a subscription basis.
Example: network firewall security services.

Characteristics of NaaS
· Operational benefits of centralization.
· Optimal flexibility in capacity control.
· On-demand usage of network resources.

IDaaS (Identity as a Service)

IDaaS is an authentication infrastructure, hosted and managed by a third party, that manages user identities for web applications and services (a minimal token-verification sketch follows the characteristics list below).
Characteristics of IdaaS
· Directory Services.
· Federated Services.
· Identity verification and Profile management. 
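
A common way an application consumes an IDaaS provider is by verifying the identity tokens the provider issues. The sketch below (Python with the PyJWT library) verifies a signed token against the provider's public key; the issuer, audience, and key are hypothetical placeholders, not details from the article:

import jwt  # the PyJWT library

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----"

def verify_identity_token(token):
    """Verify a signed identity token issued by a (hypothetical) IDaaS provider."""
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="my-web-app",                # placeholder audience
        issuer="https://idaas.example.com/",  # placeholder issuer
    )
    return claims["sub"]  # identifier of the authenticated user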

 

In summary, the five service models compare as follows:

IaaS (Infrastructure as a Service): infrastructure as an asset; platform independent; grid computing; avoids capital expenditure on hardware and human resources.

PaaS (Platform as a Service): license purchasing; consumes cloud infrastructure; solution stack; streamlined version deployment.

SaaS (Software as a Service): software as an asset; maintains cloud components; thin client; avoids capital expenditure on software and development resources.

NaaS (Network as a Service): network as an asset; simple flow-level simulations; high-level networking model; scalability and multi-tenant isolation.

IDaaS (Identity as a Service): identity as an asset; single sign-on services; impact of the global server; risk and event monitoring.

Short note on Artificial Intelligence

Artificial Intelligence is a branch of computer science concerned with making machines behave intelligently, like humans. The term "A.I." was introduced by John McCarthy in 1956. He was also the designer of the language LISP (List Processing), a high-level programming language. Given a set of rules written in a computer's programming language, an A.I. system will follow those rules strictly. Human scientists can therefore test their theories about human behaviour by converting their rules into a computer program and observing whether the computer's behaviour in executing the program resembles the natural behaviour of a human being, or at least the small subset of human behaviour they are studying. A computer scientist can also look at modelling human behaviour as a challenge to their programming abilities: if a person can do something, can they write a computer program that does the same thing? The aim of artificial intelligence is to make a computer perform tasks that humans tend to be good at.
 
The seeds of modern A.I. were planted by classical philosophers who attempted to explain the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the digital programmable computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. The device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of A.I. research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of A.I. research for decades. Many of them predicted that a machine as intelligent as a human being would exist within a generation, and they were given millions of dollars to make this vision come true. Eventually, it became evident that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "A.I. winter." Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide A.I. with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. Interest and funding in A.I. boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry. As in previous "A.I. summers," some observers predicted the imminent arrival of artificial general intelligence (a machine with intellectual capabilities that exceed those of human beings).
 
In 1956:- The first Dartmouth College summer A.I. conference was organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM (International Business Machines), and Claude Shannon. The name "artificial intelligence" was used for the first time as the topic of this conference, organized by John McCarthy. The same year saw the first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw, and Herbert A. Simon at the Carnegie Institute of Technology (now Carnegie Mellon University). It is often called the first A.I. program.
 
In 1957:-The general problem solver (GPS) was demonstrated by Newell, Shaw, and Simon.
 
In 1958-1960:- John McCarthy invented the Lisp programming language. Herbert Gelernter and Nathan Rochester described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of typical cases. The Teddington Conference on the Mechanization of Thought Processes was held in the U.K.; among the papers presented was John McCarthy's "Programs with Common Sense."
 
In 1959:- John McCarthy and Marvin Minsky founded the MIT AI Lab. Margaret Masterman and colleagues at the University of Cambridge designed semantic nets for machine translation. Ray Solomonoff laid the foundations of a mathematical theory of A.I., introducing universal Bayesian methods for inductive inference and prediction. J.C.R. Licklider published "Man-Computer Symbiosis."
 
In 1961-2000:- James Slagle wrote the first symbolic integration program in Lisp, which solved calculus problems at the college level. It was argued that sufficiently powerful formal systems are either inconsistent or allow the formulation of true theorems that cannot be proved by any theorem-proving A.I. that derives all provable theorems from the axioms; since humans can see the truth of such theorems, machines were deemed inferior. Unimation's industrial robot Unimate worked on a General Motors automobile assembly line. Thomas Evans demonstrated that computers could solve the same analogy problems as are given on I.Q. tests. Leonard Uhr and Charles Vossler published a pattern recognition program that generates, evaluates, and adjusts its own operators, one of the first machine learning programs that could acquire and modify features. Danny Bobrow's MIT dissertation showed that computers can understand natural language well enough to solve algebra word problems correctly. Bertram Raphael's MIT dissertation demonstrated the power of a logical representation of knowledge for question-answering systems. J. Alan Robinson invented a mechanical proof procedure, the resolution method, which allowed programs to work efficiently with formal logic as a representation language. A program that simulated the dialogue of a psychotherapist became a popular toy at A.I. centers on the ARPANET. Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds from scientific instrument data; it was the first expert system. In 1967 came the first successful knowledge-based program for scientific reasoning, and in 1968 the first successful knowledge-based program in mathematics. In 1969 Roger Schank at Stanford defined the conceptual dependency model for natural language understanding, later used in the first semantics-driven machine translation programs. In 1970 Jaime Carbonell developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge, and Bill Woods described augmented transition networks (ATNs) as a representation for natural language understanding. In 1973 the Assembly Robotics Group at the University of Edinburgh built Freddy, a robot capable of using visual perception to locate and assemble models. In 1975 the Meta-Dendral learning program reported new results in chemistry (some rules of mass spectrometry), the first scientific discovery by a computer to be published in a peer-reviewed journal. In 1978 Herbert A. Simon won the "Nobel" prize in economics for the theory of bounded rationality, one of the milestones of A.I. known as satisficing. That year the MOLGEN program, written by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge could be used to plan gene-cloning experiments. In 1979 the Stanford Cart, developed by Hans Moravec, became the first computer-controlled autonomous vehicle when it successfully traversed a room and circumnavigated the Stanford AI Lab. The late 1970s demonstrated the power of the ARPANET for scientific collaboration. In the 1980s Lisp machines were developed and marketed, along with the first expert system shells and commercial applications. In 1980 the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford. In 1981 Danny Hillis designed the Connection Machine, which utilizes parallel computing to bring new power to A.I. and to computation in general.
In 1982 the Fifth Generation Computer Systems project was launched, an initiative by Japan's Ministry of International Trade and Industry. In 1986 Ernst Dickmanns' team at Bundeswehr University Munich built the first robot cars, driving at up to 55 mph on empty streets. The Alacrity system, whose underlying engine was developed by Paul Tarvydas, included a small financial expert system that interpreted financial statements and models. In the early 1990s TD-Gammon became a championship-level game-playing program, competing favorably with world-class backgammon players. The 1990s brought major advances in all areas of A.I., with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. In 1991 the DART scheduling application, deployed in the first Gulf War, paid back DARPA's 30 years of investment in A.I. research. In 1993 Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). In 1995 ("No Hands Across America") a semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849-mile (4,585 km) journey; the throttle and brakes were controlled by a human driver. In the late 1990s, web crawlers and other AI-based information-extraction programs became essential to the widespread use of the World Wide Web. An intelligent room and emotional agents were demonstrated at MIT's A.I. Lab, and work began on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
 
In 2001-2016:- In 2004 NASA's robotic exploration rovers Spirit and Opportunity autonomously navigated the surface of Mars, and DARPA introduced the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money. In 2005 Honda's ASIMO robot, an artificially intelligent humanoid robot, could walk as fast as a human and deliver trays to customers in restaurant settings; the same year the Blue Brain project was born, aiming to simulate the brain in molecular detail. In 2009 Google built a self-driving car. In 2010 Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly; the award-winning machine learning for its human motion-capture technology was developed by the computer vision group at Microsoft Research. In 2011 Apple's Siri, Google's Google Now, and Microsoft's Cortana emerged as smartphone apps that use natural language to answer questions, make recommendations, and perform actions. In 2013 NEIL, the Never-Ending Image Learner, was released at Carnegie Mellon University to constantly compare and analyze relationships between different images. In 2015 an open letter calling for a ban on the development and use of autonomous weapons was signed by Hawking, Musk, Wozniak, and 3,000 researchers in A.I. and robotics.
 
Applications of A.I.
 
1. Natural Language Processing:- A computer system capable of understanding a message in natural language would seem to require both contextual knowledge and a process for making the inferences (from this contextual knowledge and the message) assumed by the message generator. Some progress has been made toward computer systems of this sort for understanding spoken and written fragments of language. Fundamental to the development of such systems are specific A.I. ideas about structures for representing contextual knowledge and particular techniques for making inferences.
 
2. Expert consulting systems:- A.I. methods have also been employed to develop automatic consulting systems. These systems provide human users with expert conclusions about specialized subject areas. Automated consulting systems have been built that can diagnose diseases, evaluate potential ore deposits, suggest structures for complex organic chemicals, and even provide advice about how to use other computer systems (a minimal rule-based sketch follows).
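
To make the idea concrete, here is a minimal sketch of a rule-based consulting system in Python; the rules and findings are made-up illustrations, not any of the real systems mentioned above:

# Each rule maps a set of observed findings to a tentative conclusion.
RULES = [
    ({"fever", "cough", "sore_throat"}, "possible flu"),
    ({"sneezing", "runny_nose"}, "possible common cold"),
    ({"headache", "sensitivity_to_light"}, "possible migraine"),
]

def consult(findings):
    """Return every conclusion whose conditions are all present in the findings."""
    facts = set(findings)
    return [conclusion for conditions, conclusion in RULES if conditions <= facts]

print(consult({"fever", "cough", "sore_throat", "headache"}))   # -> ['possible flu']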
 
3. Robotics:- Research on robots or robotics has helped to develop many A.I. ideas. It has led to several techniques for modeling the state of the world and describing the process of change from one world state to another. It has led to a better understanding of how to generate plans for action sequences and how to monitor the execution of those plans. Complex robot control problems have forced us to develop methods that first plan at a high level of abstraction, ignoring details, and then plan at lower and lower levels, where details become essential.
 
4. Automatic Programming:- The task of writing a computer program is related both to theorem proving and to robotics; much of the basic research in automatic programming, theorem proving, and robot problem-solving overlaps. In a sense, existing compilers already do automatic programming: they take a full source-code specification of what a program is to accomplish and write an object-code program that does it.

Saturday, September 04, 2021

Role of SMAC in evolution of Digital India

Introduction:

Digital technologies, which include Social, Mobility, Analytics and Cloud (SMAC) applications, have emerged as a catalyst for rapid economic growth and citizen empowerment across the globe. The vision is to empower every citizen of the country of Bharat (India) with access to digital services, knowledge and information. The Department of Electronics and Information Technology (DeitY) has taken a collaborative approach towards achieving the three visions and nine pillars of Digital India. DeitY has launched a digitally enabled platform called “MyGov” (mygov.in) to provide collaborative and participative governance.

 

The three visions of the Digital India program are:

· Digital infrastructure as a utility to every citizen (Vision-1)

· Governance and services on demand (Vision-2)

· Digital empowerment of citizens (Vision-3)

 

There are nine pillars of the Digital India program. They are as follows:

· Broadband highways

· Universal access to mobile connectivity

· Public internet access program

· e-Governance- Reforming government through technology

· eKranti- Electronic delivery of services

· Information for all

· Electronic manufacturing

· IT for jobs

· Early harvest programs

 

The Role of Social:

Social media such as Facebook, Twitter, and LinkedIn allow people to connect and share views, likes, and opinions anywhere and at any time without delay. These interactions benefit corporations and the government, which analyze the data to make decisions about products and services. This requires high-speed internet so that as many people as possible, from both rural and urban areas, can connect with each other and access various services online. While accessing any service, their digital identity will be verified through the Aadhaar card. Since most digital services are in English, many people in rural and urban areas face problems accessing them; digital resources and services will therefore be made available in Indian languages so that human-machine interaction can take place without language barriers. This will create multilingual knowledge resources. Social media will thus be used mainly for marketing, internal collaboration, and data analysis.

 

The Role of Mobility:

Mobility is a critical part of the National e-Governance Plan (NeGP) projects in India, currently being implemented at the central and state levels. NeGP has carried out work such as stakeholder needs analysis, project planning and measurement, and process reforms. The surveys and stakeholder needs analyses cover the needs of citizens in rural and urban areas, the needs of businesses, and the needs of government employees at the state and central levels.

Based on these surveys and analyses, central and state governments plan large-scale e-Governance projects that deliver services through mobile phones, making the mobile phone the centre of service delivery. The results include: participation of people from urban and rural areas in the digital and financial space through mobile banking; seamlessly integrated services; single-window access to services; services available in real time on online and mobile platforms; digitally transformed services for improving ease of doing business; cashless financial transactions through internet banking, RuPay debit cards, etc.; leveraging GIS for decision support systems; and universal digital literacy at the individual level.

Apart from this, various services are provided in rural and urban areas with the mobile phone as the central point of delivery, as follows:

· m-health (mobile based health and medicine consultancy)

· m-education (mobile based virtual education classrooms in local languages at all levels)

· m-biometric identity authentication (mobile based identity through Aadhar)

· m-agriculture (mobile based monitoring management, agri-extension advice and sale)

· m-elections (mobile based online voting and authentication)

· m-rural development (various rural development projects delivered on mobile)

· m-panchayat (panchayat services delivered on mobile)

 

The Role of Analytics:

Analytics here refers to big data analytics. Big data means data, in both structured and unstructured form, integrated from multiple, diverse, and dynamic sources of information. In fact, big data is often defined as data that exhibits the 4V properties: value, volume, velocity, and veracity. Analyzing this huge amount of data to extract patterns and relevant, useful information is called analytics. Big-data-based analytics can be used in many campaigns and in analysing election results.

The availability of digital information in India is growing very fast. Alongside data available in enterprises, the volume of data made available by the government is also increasing. Government-funded initiatives such as the Data Portal India or Aadhaar are promising directions for enabling big data applications relevant to India.

 

There are many challenges to handle large set of data such as

· Efficient architecture and infrastructure of data capturing, data analytics, data delivery, data visualization and data management

· Making data driven decisions

· Data analytics from specification of e-Health, e-Education, e-Governance etc. are yet to be identified

· Integrating big data platform (such as Hadoop) into existing data warehouses

· Security and privacy issues of data being shared for analysis or public consumption are also important to address

· Discovering patterns, predictive analytics, and other insights from big data is a non-trivial problem and provides many opportunities for algorithmic innovation.

 

The Role of Cloud:

According to the NIST definition, cloud computing is a model for enabling convenient, ubiquitous, on-demand network access to a shared pool of resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud has five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Apart from this, cloud computing has three cloud service models (IaaS, PaaS, and SaaS) and four cloud deployment models (private, public, hybrid, and community).

Through Digital India initiatives, a sharable public cloud will be available through digital lockers (DigiLocker). It enables people to keep important documents such as PAN cards, passports, mark sheets, and degree certificates in digital form. DigiLocker also provides secure access to government-issued documents and uses authentication services provided by Aadhaar.

 

Cloud Computing has several challenges in India. They are outlined as follows:

· Achieve global leadership in India in Cloud Computing uses, services, offerings and innovation.

· Accelerate national adoption in Cloud Computing technologies, driven by local expertise.

· Develop an innovative framework for Cloud Computing initiatives in India.

· Different other aspects like interoperability, privacy and security.

· Create an environment for multi-stakeholder partnership and joint progress.

 

There are also six trends of Cloud Computing.

1. Multinational companies are looking for new business growth opportunities using modern information technology solution. The SMAC has created interesting use cases for businesses. Therefore business growth and IT cost reduction is the ultimate goal.

2. Many leading IT companies have come to dominate entire businesses: Amazon is the world’s largest book store, iTunes the world’s largest music company, Facebook the world’s largest social site, and so on.

3. Cloud is becoming a major evolution step in IT market. Its adoption basically depends upon client cost, service, multi-tenant technology and multi-shared delivery model.

4. Enterprise boundaries are getting redefined. Cloud based IT solutions integrated with social media and analytics bring higher value. Therefore IT companies need to integrate with their partners, suppliers and other areas of ecosystem.

5. Make in India initiatives will drive innovation relevant to the Indian cloud market. Since 2010, about 70% of Indian software has been developed on cloud platforms, and these products can provide globalized solutions.

6. The combined cloud service market (public and private) was US $0.9 billion in 2011 and grew to about US $4.5 billion by 2015, accounting for more than 3% of the global market.

 

Conclusion:

SMAC-based IT solutions have been identified as a multi-billion-dollar prospect for IT sectors across the world. Corporations and governments are increasingly adopting these technologies as they become more agile through resource sharing within the organization, and as they seek more awareness about their customers in order to serve them better. The global IT market is flourishing with SMAC strategies that fulfil needs in a well-organized manner and promise a bright future for the Digital India program. Last but not least, this is just the beginning of a digital revolution, and it will create many more job opportunities in the IT sector in India over the next five years.

Effects of mobile phones on human body

Introduction

The rapid development of mobile phones in India started around the 1980s. At that time they were known as first-generation mobile phones, which could transmit only sound using analogue technology. Digital transmission, i.e. GSM (Global System for Mobile Communication), started around the 1990s and is known as second-generation (2G) mobile communication. Apart from voice transmission, GSM supports internet access such as email, fax, etc. For both analogue and digital mobile phones, the signals transmitted and received are waves in the radio frequency (RF) and microwave parts of the electromagnetic spectrum.

Since the year 2000, several reports have reviewed relevant studies and summarized current knowledge about mobile phones and health. The aim of this article is to bring together the available epidemiological evidence on whether exposure to RF and microwave radiation from mobile phones and their base stations can affect health.

Communication Technologies and Radiation

GSM phones, which transmit at around 900 MHz, have nowadays been replaced by UMTS phones that transmit at around 2.1 GHz (2.1 billion cycles per second). Health and behaviour studies conducted on 3G (third generation) UMTS frequencies are likely to become outdated as 4G and 5G become widely available.

Radiation is a combination of electric and magnetic energy that travels through space at the speed of light; it is also referred to as electromagnetic radiation. Radiation is categorized into two basic types.

(i) Ionizing radiation (IR): This radiation is capable of causing changes in atoms or molecules in the body that can result in tissue damage such as cancer. Examples of IR include X-rays and gamma rays.

(ii) Non-ionizing radiation (NIR): This radiation does not cause such changes in the body; rather, it can prompt molecules to vibrate, which can raise the temperature of tissue, among other effects. Examples of NIR include ultraviolet radiation, visible light (sunlight, light bulbs), microwave energy, GSM/UMTS transmissions, and radio frequency energy.

Radiation Effects from Mobile Phone:

The mobile phone system is essentially a two-way radio system: one side is the individual handset and the other is a base station. The mobile device has a radio receiver and a transmitter, and the base station antennas are mounted high off the ground. Mobile phone base stations emit a relatively constant level of RF radiation. When you make a call, the phone uses RF radiation via its antenna to talk to a nearby base station. The amount of RF emitted by the phone at that time depends on three things:

· How long we use the phone

· How close we hold the phone to our body

· How close we are to the base station

According to the World Health Organization (WHO), the radiation emitted by mobile phones and base stations cannot break chemical bonds or cause ionization in the human body. The Federal Communications Commission (FCC) nevertheless suggests that cell phone users keep a minimum distance of 20 cm between their mobile phone and their body to significantly reduce radiation exposure.

Risks that can Occur:

Evidence so far suggests that mobile phones are not directly harmful; even so, constant use of cellphones can pose the following risks.

1. Generating negative emotions: While two or more people are talking face to face, if one of them gets a call and becomes busy talking on the phone, it creates negative feelings towards the person whose device is visible.

2. Negative effects on stress levels: The constant ringing, vibrating alerts, and reminders can keep cellphone users on edge and raise their stress levels.

3. Increased risk of illness through your immune system: constant handling of your cellphone means the handset can harbour germs.

4. Increased risk of chronic pain: replying to messages requires constant use of the hands, which can cause pain and inflammation in the joints of the hands.

5. Increased risk of eye strain: cellphone screens are generally smaller than computer or laptop screens, so reading messages puts more stress and strain on your eyes.

Precautions to be taken:

Cellphones are an integral part of our day-to-day life. The following precautions, however, can reduce our exposure to RF radiation while using them.

· While purchasing a cellphone, check how much radiation it emits by looking at its SAR (Specific Absorption Rate), a measure of the amount of RF energy absorbed by the body. SAR is defined as the power absorbed per mass of tissue, measured in watts per kilogram (W/kg); a brief example calculation follows this list.

· Limit the number of calls you make.

· Restrict the length of your calls.

· Use hands-free devices, such as wired headsets or wireless ones like Bluetooth; the FCC classifies Bluetooth and wired headsets as low-power, unlicensed radio frequency devices.

· If you are not using a hands-free device, put the loudspeaker on and hold the phone away from your ear.

· Avoid carrying your phone switched on in your pocket, on your belt, or anywhere else close to your body, since cellphones emit radiation.

· Alternate the side of your head against which you hold the phone while speaking.

· Text message instead of talking.
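
For reference, here is a brief worked calculation of SAR as defined above (power absorbed per mass of tissue). The numbers are illustrative values chosen for this sketch, not measurements from the article:

absorbed_power_watts = 0.08  # illustrative RF power absorbed by the tissue (W)
tissue_mass_kg = 0.05        # illustrative mass of the exposed tissue (kg)

sar = absorbed_power_watts / tissue_mass_kg
print(f"SAR = {sar:.1f} W/kg")  # 1.6 W/kg for these illustrative numbers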

Friday, September 03, 2021

Types in Research Methodology

Research methods are classified on different criteria: the general category, the nature of the study, the purpose of the study, and the research design. Case studies and interviews are also used, depending on the methodology. In some research studies two or more methods are combined, while in others only a few methods are used.

 

Based on General Category,

1. Quantitative Research

Quantitative research deals with numbers: data are gathered in numerical form and the summary is drawn from those numbers. Graphs and statistics help to quantify the results in quantitative research.

2. Qualitative Research

Qualitative research deals with non-numerical data. When the data cannot be understood in terms of numbers, qualitative research comes to the researcher's rescue. Though not as reliable as quantitative research, the qualitative method helps produce a better summary in terms of the underlying theory and information.

 

Based on the nature of the research,

3. Descriptive Research

Descriptive methods focus on facts and figures, and surveys and case studies are carried out to verify those facts. This helps to identify and explain the facts with examples rather than to reject them. Many parameters can be used in descriptive research methods to explain the facts.

4. Analytical Research

The analytical research method uses facts that have already been verified as the fundamental basis of the research, and a critical evaluation of the research material is carried out. Analytical methods may also make use of quantitative methods.

 

Based on the purpose of the study,

5. Applied Research

The applied research method is action- and implementation-oriented: a single area of research is considered and the facts and figures are generalized. Variables or parameters are held constant and forecasts are made so that the methods can be implemented easily. Technical language is used in this research method, and the summary is based on technical facts and figures.

6. Fundamental Research

The fundamental research method is purely basic research, done to discover an element or theory that has never existed before. Several domains and areas are connected, and the purpose is to determine how conventional things can be modified or something new developed. The summary is written in commonly understandable language, and logical findings are applied in the research.

 

Based on research design,

7. Exploratory Research

Exploratory research studies are based on theories and their detailed explanation, and they do not give a final conclusion for the research topic. The structure is informal, and the methods offer a flexible, investigative strategy for the study. The hypothesis is not tested, and the result is not of immediate use to the outside world; instead, the findings are topic-related and help in exploring the research further.

8. Conclusive Research

The purpose of the conclusive research method is to provide an answer to the questions posed by the research topic, using a proper design framework in the methodology. A well-designed framework helps in formulating and testing the hypotheses and reporting the results. The results are unique yet generalizable, and they help the outside world. Researchers take satisfaction in resolving the problem at hand and helping society.

9. Surveys

Surveys play an important role in research methodology. They help gather a vast amount of real-time data useful to the research process, at low cost and comparatively faster than other methods. Surveys can be conducted using both quantitative and qualitative methods. Quantitative surveys are preferred over qualitative surveys because they give numerical outputs and the data are concrete. In business, surveys are mainly used to gauge the demand and supply of a product in the market and to forecast production based on the results.

10. Case Studies

In the case study method, different cases are studied and the most relevant one is selected for the research. Case studies help form an idea of the research and contribute to its fundamental base. Various facts and figures drawn from the case studies are used to make genuine reviews of the research topic. Researchers can keep the topic general or make it specific based on the literature reviews from the studies. A general understanding of the research can be gained from a case study.
