Toward an Ontology for Functional Requirements


Introduction

Software development is one of the most demanding tasks in the field of information systems, and a key process within it is specifying the software requirements, which aims at understanding and defining the functionality required from the software (Lauesen, 2002). This feat is achieved by outlining software requirements that describe both the functional and the non-functional requirements of the system, which subsequently become the basis for developing it (Sommerville, 2007).

Existing software requirements also play a crucial role in providing insight into the reusability of software artifacts that have already been implemented. During development, developers and customers typically hold discussions in order to agree on the requirements specifying the system's functionality (Wiegers, 2003). This research report focuses on an ontology for functional requirements, so the emphasis is on software functionality.

According to Gómez-Pérez, Fernández-López and Corcho (2004), an ontology is an explicit, formal specification of a shared conceptualization of objects in terms of their constraints, relationships, properties, and behaviors. With regard to software, functional requirements (FRs) describe the desired characteristics of the software as specified by the customer, typically expressed as sequences of actions in a particular context (Malan & Bredemeyer, 1999).

Al-Ahmad, Magel and Abufardeh (2015) defined functional requirements as characteristics that describe system behavior in terms of the system's inputs, its outputs, and the relationships between them. Such requirements are crucial in the software development life cycle because they serve as the basis for cost estimation, work plans, implementation, and maintenance (van Lamsweerde, 2009). The purpose of this research report is therefore to build an ontology for functional requirements.


Motivation/scope

The purpose of this research report is to build an ontology for functional requirements. The motivation lies in systematizing the specification of the ontology's functional requirements by devising prescriptive, efficient, and detailed methodological guidelines. These guidelines are developed in the context of competency questions (CQs) and are motivated by existing ontology-building methodologies as well as the available literature and practice (van Lamsweerde, 2009).

The project is further inspired by the fact that such methodological guidelines help capture knowledge from the users of the ontologies being developed, producing an ontology requirements specification document (ORSD) that ontology engineers subsequently use to develop ontologies satisfying the identified functional requirements.

A further significant motivation is that the methodological guidelines to be developed serve as an agreement among domain experts, ontology engineers, and users on the functional requirements to be included in the ontology (van Lamsweerde, 2009).

Building a knowledge-intensive software system was another motivation for this project: in an actual application, the ORSD would be decisive throughout ontology development because it facilitates (1) searching for and reusing existing knowledge-aware resources so that they can be re-engineered into ontologies; (2) searching for and reusing existing ontologies, ontology design patterns, ontology statements, or ontology modules; and (3) verifying the ontology throughout its development, among other activities (Roth & Woodsend, 2014).

The methodological guidelines presented in this research report were generated in the context of the NeOn Methodology and the NeOn project guidelines (Gómez-Pérez & Suárez-Figueroa, 2008). The report is organized into four main sections: Section 1 is the introduction, which covers the motivation and scope, the problem statement and its significance for information systems (IS), the importance of the problem, and the objectives.

Section 2 provides the background, in which the methodological guidelines for specifying the ontology of functional requirements are presented and discussed. Section 3 presents the discussion and the anticipated applications of the proposed ontology for functional requirements. Finally, Section 4 provides the conclusions and future work.


Problem statement and significance for IS

Since the invention of computers and information systems, software has never been used as widely as it is today, owing to the myriad of problems it now solves (Al-Ahmad et al., 2015). Much of today's development effort focuses on tailor-made software intended to solve a common problem faced by a group of individuals, companies, organizations, or institutions.

Hence, one of the most demanding and important processes in software development is specifying the requirements of the ontology, especially the functional requirements that outline the functionality of the software system (van Lamsweerde, 2009). Failure to carry out this task effectively is a serious problem, because it leads to an inadequate ORSD that does not succinctly specify and describe the functions of the software product or system (Grüninger & Fox, 1995).

When the functional requirements of an ontology are confusing, the cost of developing the ontology typically increases, which makes requirements analysis the most important phase of the ontology development life cycle (Cascini, Fantoni & Montagna, 2013). This analysis focuses on specifying functional requirements that satisfy customer needs and that developers can feasibly implement.

This is a challenging feat and often poses a major problem for software developers. Most existing methodologies establish ontology requirements through the identification of competency questions (CQs), but their guidelines do not define the ontology's functional requirements in sufficient detail, which causes an additional problem (Staab et al., 2001; Roth & Woodsend, 2014).


Furthermore, the challenge of sufficiently identifying and describing ontology requirements, particularly functional requirements, during the software development life cycle is of great significance for information systems (IS). The methodological guidelines used to develop ontology requirements help capture knowledge from users, producing an ORSD that ontology engineers subsequently use to develop ontologies satisfying the identified requirements (van Lamsweerde, 2009).

As a result, these methodological guidelines serve as an agreement among domain experts, ontology engineers, and end users on the functional requirements to be included in the ontology, and they are therefore significant for the IS field as a whole (van Lamsweerde, 2009).

In addition, when building a knowledge-intensive software system, the ORSD derived from the functional requirements and the methodological guidelines facilitates searching for and reusing existing knowledge-aware resources so that they can be re-engineered into ontologies; searching for and reusing existing ontologies, ontology design patterns, ontology statements, or ontology modules; and verifying the ontology throughout its development, among other activities (Roth & Woodsend, 2014).


Importance of the problem

The importance of the problem discussed in the previous sub-section is indisputable, because specifying ontology requirements plays a critical role in software development: it attempts to define and understand the functionality required from the software system or product on the basis of the identified functional requirements (Kotonya & Sommerville, 1998).

The detailed software requirements document produced as a result provides several benefits, including:

(a) establishing the basis on which customers and developers or suppliers agree on the uses and users of the software system or product to be developed,

(b) reducing the effort required to develop the software,

(c) providing the basis on which costs and schedules are estimated, and

(d) offering a baseline to validate and verify the developed software system or product (Ambrósio et al., 2004).

Clearly developed methodological guidelines thus help IT experts and technicians build ontology-based applications and software used to solve everyday problems (Wiering, 1996). Thanks to these efforts, software developers now have precise methodological guidelines to help them define the functional requirements of the applications they develop (van Lamsweerde, 2009).


Objectives

The overall goal of this project is to identify a methodology for building ontologies, including methodological guidelines for specifying an ontology's functional requirements. This involves stating the purpose of the proposed ontology, its intended users and uses, and the functional requirements it should fulfill after formal implementation, expressed through detailed methodological guidelines that specify those requirements efficiently.

Background    

Interest in approaches for building ontologies from scratch has been growing since the 1990s and the early years of this century, especially in the reuse of existing ontologies and in semi-automatic methods that reduce the knowledge-acquisition challenges of ontology development. Until the mid-1990s, however, progress was slow: ontology building was treated as an art rather than an engineering activity, each team pursued its own design criteria and principles, and the building phases were carried out manually.

In 1997, the ontology development process was identified within the METHONTOLOGY framework for ontology construction (Fernández-López & Gómez-Pérez, 2004). The proposal was based on the IEEE standard for software development and outlined all the activities carried out when developing ontologies (IEEE, 1998).

When developing a software application in the context of ontologies, the functional requirements of the ontology should be identified as well as those of the application (Wiegers, 2003). Sommerville (2007) notes that precise methodologies now exist to help developers of ontology-based applications define application requirements. For instance, METHONTOLOGY (Gómez-Pérez et al., 2004) identifies the goals of the ontology requirements specification activity but does not propose methods for carrying it out.

In other methodologies, such as Grüninger and Fox (1995), the Unified methodology (Uschold, 1996), and the On-To-Knowledge methodology (Staab et al., 2001), requirements identification follows three aspects of ORSD creation: (1) the purpose of the ontology, (2) its intended users and uses, and (3) the set of requirements the ontology should fulfill after its formal implementation.
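To illustrate, these three ORSD slots could be recorded in a simple structure before any formal modeling begins. The following Python sketch is purely illustrative; the field names and example values are assumptions and are not taken from any of the cited methodologies.

```python
# Minimal ORSD skeleton covering the three slots named above.
# Field names and example content are illustrative assumptions only.
orsd = {
    "purpose": "Represent the functional requirements of a software system",
    "intended_users": ["requirements engineers", "ontology engineers", "domain experts"],
    "intended_uses": [
        "search for and reuse existing knowledge-aware resources",
        "verify the ontology against its requirements",
    ],
    "functional_requirements": [],  # filled in later with competency questions
}
```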

Most existing methodologies establish ontology requirements using competency questions (CQs): natural-language questions, together with their answers, that the ontology to be built should be able to address. The answers to these questions are essential for determining and evaluating the requirements being specified.


Generally, ontology requirements are expressed in a variety of ways, including storyboards and UML diagrams, but expectations of the ontology are most commonly expressed in natural language, for example the ability of a user to log in to his or her account (Mich et al., 2004). Although natural language has the benefit of being intelligible to both developers and clients, it can lead to ambiguity, vagueness, and incompleteness (Fernández-López & Gómez-Pérez, 2004). Roth and Woodsend (2014) argued that while formal languages can eliminate some of these challenges, customers are often unable to understand highly formalized requirements.

Al-Ahmad et al. (2015) proposed that approaches capturing both the semantic and the syntactic features of requirements can be used to reduce the inconsistency and ambiguity caused by informal natural language.

Guidelines of NeOn Methodology for Ontology Functional Requirements

This research report discusses ontology functional requirements using methodological guidelines framed by the NeOn Methodology (Gómez-Pérez & Suárez-Figueroa, 2008). The guidelines presented here were created on the basis of the NeOn Methodology, relying in particular on previous studies that reviewed the state of ontology development.

The tasks below outline the methodological guidelines for specifying ontology functional requirements in a prescriptive and detailed manner, highlighting the main tasks to be carried out, the inputs and outputs involved, and the responsible actors.


Task 1: Identification of the purpose, scope, and implementation language of the ontology. This task determines the ontology's main goal, its feasible granularity and coverage, and its implementation language. The ontology development team achieves this by interviewing domain experts and end users to identify their needs, so that the developers can decide on the most appropriate language in which to formally implement the ontology.

Task 2: Identification of the intended end users. This task establishes the intended end users who will mainly use the ontology to be developed. The development team achieves this by interviewing domain experts and end users; the set of identified needs is taken as the input, and a list of the ontology's intended users is the output.

Task 3: Identification of the intended uses. Scenarios linked to the targeted ontology-based application are the main motivation for developing the ontology, so this task obtains the intended use scenarios and uses of the ontology. The development team achieves this by interviewing domain experts and end users; the set of identified needs is taken as the input, and a list of the ontology's intended uses, expressed as scenarios, is the output.

The inputs should describe how the ontology will be used within the intended ontology-based application, giving an overview of its functional requirements, while the output should describe the set of general functional requirements the ontology should fulfill once it has been formally implemented.

Task 4: Identification of the ontology's functional requirements. This task acquires the set of functional requirements the ontology should fulfill; these are essentially content requirements referring to the specific knowledge the proposed ontology must represent. The development team achieves this by interviewing domain experts and end users; the set of identified needs is taken as the input, and the ontology's initial functional requirements are the output.

Writing CQs in natural language is the main technique for identifying functional requirements, supported by other tools such as spreadsheets and mind-mapping tools. Wiki tools such as Cicero are also appropriate when the people involved are geographically distributed.
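As a rough illustration of how the CQs identified in this task might be recorded, the following Python sketch defines a minimal competency-question structure. The class name, fields, and example questions are hypothetical and are not drawn from the NeOn materials.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CompetencyQuestion:
    """One functional requirement expressed as a natural-language CQ."""
    question: str
    expected_answer: str
    group: Optional[str] = None      # assigned later, during grouping (Task 5)
    priority: Optional[int] = None   # assigned later, during prioritization (Task 7)

# Example CQs; the wording is invented purely for illustration.
cqs = [
    CompetencyQuestion("In which currency is an invoice issued?", "Euro"),
    CompetencyQuestion("What is the start date of a job offer?", "2016-01-01"),
]
```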

Task 5: Grouping of the ontology's functional requirements. This task groups the CQs identified in Task 4 into categories. Domain experts, intended ontology users, and the development team should use a hybrid approach, combining categories established in advance (date, time, units of measurement, languages, locations, currencies, and so on) with categories created for the words that appear most frequently in the list of CQs.

The card-sorting technique is used for manual grouping, while natural-language clustering and information-extraction techniques are used for automated grouping. In addition, mind maps can display the grouped CQs graphically, and Cicero supports collaborative grouping.
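A minimal sketch of the frequency-based part of this grouping, building on the CompetencyQuestion structure sketched under Task 4, is shown below. The predefined categories, stop words, and the choice of the five most frequent words are illustrative assumptions; a real project would rely on card sorting or proper clustering tools.

```python
import re
from collections import Counter

PREDEFINED = {"date", "time", "currency", "location", "language", "unit"}
STOPWORDS = {"what", "is", "the", "of", "in", "a", "an", "which", "for"}

def group_cqs(cqs):
    """Assign each CQ to a matching predefined category, or else to one of the
    most frequent content words across all CQs (a crude stand-in for the manual
    card sorting or automated clustering described in Task 5)."""
    words = [w for cq in cqs for w in re.findall(r"[a-z]+", cq.question.lower())
             if w not in STOPWORDS]
    frequent = {w for w, _ in Counter(words).most_common(5)}
    for cq in cqs:
        tokens = set(re.findall(r"[a-z]+", cq.question.lower()))
        hit = (tokens & PREDEFINED) or (tokens & frequent)
        cq.group = sorted(hit)[0] if hit else "uncategorised"
    return cqs
```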


Task 6: Validation of the set of functional requirements. The goal of this task is to identify missing functional ontology requirements, possible conflicts between them, and contradictions among them. Domain experts and end users execute the task, taking the set of functional requirements identified in Task 4 as the input and determining the validity of each element. Confirmation of the validity of the set of functional requirements is the output.
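The sketch below, again assuming the CompetencyQuestion structure introduced earlier, covers only the most mechanical part of this validation: flagging duplicated questions and missing answers. Detecting genuine conflicts and contradictions remains a task for the domain experts and end users.

```python
def validate(cqs):
    """Flag obvious problems: duplicated questions and questions without an
    expected answer; real conflict detection is left to the human reviewers."""
    seen, issues = set(), []
    for cq in cqs:
        key = cq.question.strip().lower()
        if key in seen:
            issues.append(f"duplicate CQ: {cq.question}")
        seen.add(key)
        if not cq.expected_answer:
            issues.append(f"missing answer: {cq.question}")
    return issues
```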

Task 7: Prioritization of the ontology's functional requirements. This task assigns levels of priority to the identified functional requirements, both across the groups of CQs obtained in Task 5 and across the CQs within each group. Domain experts, intended end users, and the development team execute the task, taking the functional requirements identified in Task 4 and the groups of CQs obtained in Task 5 as inputs.

The output is the set of priorities assigned to each functional requirement, to each CQ within a group, and to each group of CQs.
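A minimal sketch of such prioritization is given below. It simply propagates a group-level priority, agreed with the domain experts and end users, down to the individual CQs; the mapping mentioned in the comment is an invented example.

```python
def prioritise(cqs, group_priority):
    """Give each CQ the priority agreed for its group; group_priority is a
    mapping such as {"currency": 1, "date": 2} produced with domain experts,
    end users, and the development team."""
    for cq in cqs:
        cq.priority = group_priority.get(cq.group, 99)  # 99 marks unranked CQs
    return sorted(cqs, key=lambda cq: cq.priority)
```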

Task 8: Extraction of terminology and frequencies. This task extracts a pre-glossary of terms from the CQs obtained and the answers provided. The extracted pre-glossary is divided into three distinct parts: terms extracted from the CQs, terms extracted from the answers to the CQs, and terms identified as named entities, that is, objects.

Extracted terms with higher frequencies of appearance will later be used to search knowledge-aware resources with reuse potential in subsequent stages of ontology development. The development team carries out this task, taking the CQs and the answers provided as inputs and using terminology-extraction techniques and the tools that support them.
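The following sketch illustrates one simple way to build such a pre-glossary from the CQs and their answers using plain word counts. Real terminology-extraction tools are considerably more sophisticated, and the stop-word list here is an arbitrary assumption.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "in", "is", "what", "which"}

def pre_glossary(cqs, top_n=20):
    """Count candidate terms across the CQ texts and their answers; the most
    frequent terms guide the later search for reusable knowledge-aware resources."""
    text = " ".join(f"{cq.question} {cq.expected_answer}" for cq in cqs).lower()
    terms = [w for w in re.findall(r"[a-z]+", text) if w not in STOP]
    return Counter(terms).most_common(top_n)
```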


Discussion and Anticipated Applications

When developing an ontology-based software application, the requirements of the ontology must be identified as well as those of the application itself. The experience gained in this project indicates that precise and effective identification of the knowledge inherent in the ontology is even more essential than merely capturing the software's functional requirements.

As a result, developers of ontology-based software applications now have specific methodological guidelines to help them define the functional requirements of the applications they intend to develop. The NeOn guidelines used here to specify the functional requirements of the proposed ontology have also been applied to the NeOn ontologies and to ontologies developed in educational and research projects, and in each case the developers' feedback on the resulting ontologies has been interesting (Gómez-Pérez & Suárez-Figueroa, 2008).

Furthermore, it is worth mentioning that the methodological guidelines developed in this research project on the basis of the NeOn Methodology, together with the subsequent ORSD, facilitated the identification of the ontology's functional requirements in several ways, including:

(1) allowing the specific knowledge to be represented in the developed ontologies to be identified,

(2) facilitating the reuse of existing knowledge resources by focusing the search on resources representing that specific knowledge, and

(3) permitting the developed ontologies to be verified against the functional requirements they should satisfy (Gómez-Pérez & Suárez-Figueroa, 2008).

As a result, the detailed software requirements document produced proved to be a good guideline because it fulfilled several purposes, including: (a) establishing the basis on which customers and developers or suppliers agree on the uses and users of the software system or product to be developed, (b) reducing the effort required to develop the software, (c) providing the basis on which costs and schedules are estimated, and (d) offering a baseline against which the developed system or product can be validated and verified (Ambrósio et al., 2004).


The methodological guidelines developed on the basis of the NeOn Methodology succinctly outline the ontology's functional requirements, leading to an ORSD that allows (a) a more direct search for the existing knowledge resources that need to be reused during ontology development, and (b) evaluation of the ontology's content (Wiegers, 2003). The guidelines can therefore be applied in a variety of projects.

For instance, the developed guidelines can be applied in e-Employment, whose goals are to develop an interoperable, knowledge-intensive architecture based on ontologies for public e-Employment services (PES) and to enable federated marketplaces for the mediation of employment agencies through peer-to-peer interoperation (Gómez-Pérez & Suárez-Figueroa, 2008). These NeOn Methodology guidelines can also be used in e-Procurement and in pharmaceutical companies.

In e-Procurement, for instance, they can help solve the interoperability problems between the parties that issue and receive invoices, while in pharmaceutical companies they can help systematize the creation, maintenance, and storage of up-to-date drug-related information and allow new drug resources to be integrated easily (Sommerville, 2007).


Conclusion and Future Work

In conclusion, identifying functional requirements is one of the crucial activities in ontology development. In this research report, the specification of an ontology's functional requirements has been systematized through the prescriptive and detailed methodological guidelines proposed in the context of the NeOn Methodology.

The developed methodological guidelines can act as a baseline for creating a particular ORSD, which is critical for speeding up ontology development. The terms in the ORSD's pre-glossary, together with their frequencies of occurrence, can be used to search for and select existing consensual, knowledge-aware resources which, after re-engineering where necessary, allow ontologies to be built faster, more cheaply, and with higher quality.

The methodological guidelines based on the NeOn Methodology presented in this research report can be extended further into a tool capable of automatically generating all possible combinations of relations and concepts in functional requirements, automating the construction of detection rules, and supporting the development of the ORSD.

Although developers of ontology-based software applications now have specific methodological guidelines to help them define the functional requirements of the applications they intend to develop, these guidelines are not yet sufficient to define the functional requirements of ontologies succinctly, and further work is planned in the near future to address this challenge.


References   

B. Al-Ahmad, K. Magel, and S. Abufardeh, “A Domain Ontology Based Approach to Identify Effect Types of Security Requirements upon Functional Requirements,” International Journal of Knowledge Engineering, vol. 1, no. 1, pp. 24-29, 2015.

A. P. Ambrósio, D. C. de Santos, F. N. de Lucena, and J. C. de Silva, “Software Engineering Documentation: An Ontology-Based Approach,” Proc. Web Media and LA-Web Joint Conf., 10th Brazilian Symp. Multimedia and Web Second Latin Am. Web Congress, pp. 38-40, 2004.

G. Cascini, G. Fantoni, and F. Montagna, “Situating needs and requirements in the FBS framework,” Design Studies, vol. 34, no. 5, pp. 636–662, 2013.

B. Chandrasekaran, and J. R. Josephson, “Function in device representation,” Engineering with Computers, vol. 16, no. 3-4, pp. 162–177, 2000.

A. Davis, “Software Requirements: Objects, Functions and States,” Upper Saddle River, NJ: Prentice Hall, 1993.

A. Gómez-Pérez, M. Fernández-López, and O. Corcho, “Ontological Engineering: With examples from the areas of Knowledge Management, e-Commerce and the Semantic Web,” London, UK: Springer Verlag London Limited, 2004.

M. Grüninger, and M. Fox, “Methodology for the design and evaluation of ontologies,” In Skuce, D. (ed) IJCAI95 Workshop on Basic Ontological Issues in Knowledge Sharing, pp. 6.1–6.10, 1995.

M. Fernández-López, and A. Gómez-Pérez, “Searching for a Time Ontology for Semantic Web Applications,” Formal Ontology in Information Systems, Turín, Italy, 2004.

A. Gómez-Pérez, and M. C. Suárez-Figueroa, “NeOn Methodology: Scenarios for Building Networks of Ontologies,” 16th International Conference on Knowledge Engineering and Knowledge Management Knowledge Patterns (EKAW 2008). Conference Poster, Turín, Italy, 2008.

T. J. Howard, S. J. Culley, and E. Dekoninck, “Describing the creative design process by the integration of engineering design and cognitive psychology literature,” Design Studies, vol. 29, no. 2, pp. 160–180, 2008.

IEEE, “Recommended Practice for Software Requirements Specifications,” IEEE Standards, p. 830, 1998.

G. Kotonya, and I. Sommerville, “Requirements Engineering: Processes and Techniques,” New York, NY: John Wiley & Sons, 1998.

S. Lauesen, “Software Requirements: Styles and Techniques,” London, UK: Pearson Education Limited, 2002.

J. Lee, and N. L. Xue, “Analyzing user requirements by use cases: A goal-driven approach,” IEEE Software, vol. 16, no. 4, pp. 92-101, 1999.

R. Malan, and D. Bredemeyer, “Functional requirements and use cases,” [Online]. Retrieved from: http://www.bredemeyer.com/pdf_files/functreq.pdf, 1999.

L. Mich, F. Mariangela, and N. I. Pierluigi, “Market research for requirements analysis using linguistic tools,” Requirements Engineering, vol. 9, no. 1, pp. 40–56, 2004.

M. Roth, and K. Woodsend, “Composition of word representations improves semantic role labeling,” Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, pp. 407–413, 2014.

I. Sommerville, “Software Engineering,” (8th ed.). New York, NY: International Computer Science Series, 2007.

S. Staab, P. Hans, R. Studer, and Y. Sure, “Knowledge Processes and Ontologies,” IEEE Intelligent Systems, vol. 16, no. 1, pp. 26–34, 2001.

M. C. Suárez-Figueroa, G. Aguado de Cea, C. Buil, K. Dellschaft, M. Fernández-López, and M. Uschold, “Building Ontologies: Towards A Unified Methodology,” In: Watson I (ed.) 16th Annual Conference of the British Computer Society Specialist Group on Expert Systems. Cambridge, United Kingdom, 1996.

M. C. Suárez-Figueroa, M. Fernández-López, A. Gómez-Pérez, K. Dellschaft, H. Lewen, and M. Dzbor, “NeOn D5.3.2. Revision and Extension of the NeOn Development Process and Ontology Life Cycle, NeOn project,” Retrieved from: http://www.neon-project.org, 2008.

M. Uschold, “Building Ontologies: Towards A Unified Methodology,” In: Watson I (ed.) 16th Annual Conference of the British Computer Society Specialist Group on Expert Systems. Cambridge, United Kingdom, 1996.

A. van Lamsweerde, “Requirements Engineering: From System Goals to UML Models to Software Specifications,” New York, NY: John Wiley & Sons, 2009.

K. E. Wiegers, “Software Requirements: Practical techniques for gathering and managing requirements throughout the product development cycle,” (2nd ed.). Redmond: Microsoft Press, 2003.

R. Wiering, “Requirements Engineering: Frameworks for Understanding,” New York, NY: John Wiley & Sons, 1996.


Computer Hardware Price and Feature Comparison


Price and Feature Comparison Table

The comparison below lists computer hardware prices for comparable configurations from three different vendor sites and brands.

Amazon (brand: Dell)
Desktop: Dell OptiPlex 3020 desktop computer, Intel Core i5-4590, 3.30 GHz, $370.99
Monitor: Dell UltraSharp U2515H, 25 inches, 2560×1440 maximum resolution, $324.51
Printer: Brother MFC-L8850CDW laser printer, USB, network (RJ45), $494.50
Mouse: USB Dell mouse, $50
Operating system: Windows 10 Pro Pack, $99
Office suite: Microsoft Office 10, $79
Total (cost): $14,180

SmartPrice (brand listed as Dell)
Desktop: HP Envy 750-177c, Intel Core i7, Windows 10, HP USB keyboard, 16 GB RAM, 2.0 TB hard drive, $899.99
Monitor: Samsung U28E590, 28 inches, 3840×2160 maximum resolution, $670.58
Printer: HP Officejet Pro 8630 inkjet printer, USB, network (RJ45), $280.90
Mouse: USB Dell mouse, $100
Operating system: Windows 10 Pro Pack, $199
Office suite: Microsoft Office 10, $100
Total (cost): $22,500

OutletPC (brand: Dell)
Desktop: Dell XPS 8700, Intel Core i7, 4 GB graphics, Windows 10, $749.99
Monitor: Samsung S24D300H, 24 inches, 1920×1080 maximum resolution, $204.93
Printer: Epson Expression Premium XP-720 inkjet printer, USB, PictBridge, $151.54
Mouse: USB Dell mouse, $89
Operating system: Windows 10 Pro Pack, $150
Office suite: Microsoft Office, $150
Total (cost): $14,950.46
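As a quick arithmetic check on the comparison, the per-unit component prices can be summed for each vendor with the short Python sketch below. The per-set sums come out close to one tenth of the bundle totals quoted above, which suggests those totals cover roughly ten sets each, although the essay does not state this explicitly.

```python
# Per-unit component prices transcribed from the comparison above (desktop,
# monitor, printer, mouse, operating system, office suite).
prices = {
    "Amazon":     [370.99, 324.51, 494.50, 50.00, 99.00, 79.00],
    "SmartPrice": [899.99, 670.58, 280.90, 100.00, 199.00, 100.00],
    "OutletPC":   [749.99, 204.93, 151.54, 89.00, 150.00, 150.00],
}
for vendor, items in prices.items():
    print(f"{vendor}: ${sum(items):,.2f} for one complete set")
```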

When it comes to the desktop, Amazon is the best dealer in computing hardware because the benefits are compelling, as outlined below.


Centralized and simplified management

With Amazon, software updates are disseminated centrally, so IT does not need to manage installation on every individual computer; deployment can take place from a central management console. Updates apply not only to desktops but also to mobile applications, and the centralized server infrastructure offers simple backup functionality.

With efficient infrastructure and sufficient bandwidth, this approach reduces the network traffic generated by individual desktops. Amazon also allows IT to provide a significant level of security and compliance: servers can be locked down and protected in a manageable way, with less exposure to desktop vulnerabilities. For instance, administrators can centralize security controls and reduce the malware footprint in the event of an attack, and desktops can be re-provisioned when risks emerge.

Low cost of computer hardware

By harnessing server power that is invisible to users, Amazon offers efficient utilization of central computing capacity, which minimizes the cost of buying new hardware along with the associated software, license, and support expenses. Considering the total cost of the desktop, monitor, printer, mouse, operating system, and office suite, Amazon offers a fairer deal than the other two vendors, SmartPrice and OutletPC.

In terms of performance, an i7 processor is more powerful than an i5, but with a memory upgrade an i5 can perform almost as well as an i7. The Dell system's 3.3 GHz clock speed is very good, and a 2.0 TB drive can store millions of files. The laser printer is the better option compared with the inkjet printers.


Moreover, the printer is not only network-capable but also has a printing resolution of 800 and can print 72 A4 pages per minute. When it comes to the interface, Amazon has a better offer than the other dealers, and it would be imprudent to buy an operating system at $150 or $199 when the same product is available for $99.

The minimum monitor size required is 18 inches, and Amazon offers the Dell UltraSharp more cost-effectively than the other two vendors. In the end, it will cost approximately $14,180 to buy brand-new desktop sets from Amazon, which is far cheaper than buying from SmartPrice at $22,500 or OutletPC at $14,950.46.

Enhanced mobility and access

Amazon desktops also give users remote access through various computing tools, which is an extremely accommodating characteristic for remote and mobile users without a fixed workplace. With some Amazon applications, the active desktop state can be preserved, allowing users to pick up where they left off.

References

http://www.amazon.com/

http://www.outletpc.com/

http://www.mysmartprice.com/computer/pricelist/computer-price-list-in-india.html


Enterprise-Level Networking Essay


Introduction

This paper analyzes the structure of the website http://er.educause.edu. Apart from stating the site's purpose, it examines the pros and cons of its structure and layout in general. Subsequently, the paper discusses the important lessons learned from the article on enterprise-level networking and whether or not the source is appropriate for research.

A summary description of the website’s structure, purpose, pros and cons

The usability and design of http://er.educause.edu are rather complex: finding the exact information one intends to use requires searching through several pages and hyperlinks. A site with good usability should be easy to use without visitors having to examine many pages or hyperlinks to find what they are searching for (Clark, 2002). Moreover, search engine optimization is the basis of any successful website.

However, educause.edu has not been optimized appropriately, which is a deterrent when it comes to search engine ranking. On the other hand, the website offers content in various formats, such as videos, slides, pictures, and text; if well optimized, this could turn the site into a resourceful hub. The site's loading speed is quite remarkable.


The quality of the videos and pictures is excellent. Modern information websites also drive traffic through social media platforms and other associated sites that enhance user interaction; this website is integrated with youtube.com and Twitter, creating a channel for information sharing. Moreover, the website has a blog that provides a platform for discussion, and the ability of users to respond to articles allows interaction with other users holding similar or divergent opinions (Clark, 2002).

Articles are clustered by category, which allows quick browsing, and navigation is done from the right-hand side of the website. The website also has a module that permits members to log into the system; for instance, end users can only make comments after becoming members. Apart from social media and blog interaction, a guest can send an email using the contact link provided in the footer panel. Overall, the homepage layout is simple and professional.

Lessons Learned

Based on this website, it is clear that during initial adoption the majority of IT organizations concentrate on the capabilities of the project. In wireless systems, for instance, the main capability is providing internet connectivity across the organization without connecting to a wall outlet. Such capability benefits different groups: employees can access the system and the internet during meetings, researchers can work from anywhere while gathering data, and students can use their laptops across the campus (Grochow, 2015).


Beyond the immediate advantages, IT staff often point to a further potential benefit: wireless networks can minimize or eliminate the need for wired systems and thereby decrease infrastructure costs. However, several new wireless tools will require additional bandwidth that wired networks will need to accommodate, and the newness of wireless systems raises the question of whether the whole infrastructure should eventually be replaced.

The website is useful because it demonstrates the tension between installing new infrastructure while maintaining old infrastructure, and adopting new technology only to find that newer editions supersede it. Therefore, while one can expect various benefits from new infrastructure, there is always a risk that those benefits will not be attained. A thorough risk assessment considers the consequences of the estimated benefits failing to materialize, as well as other possible risks such as the consequences for departmental networks.

Allowing employee participation is a primary strategy for eliminating surprises and demonstrating that risks are being assessed comprehensively (Grochow, 2015). By and large, with new infrastructure, risks and benefits are described with little prior information to draw upon, so the assessment is largely qualitative, although some aspects can be quantified. Subjective input can come from surveys or from the experiences of other organizations.

References

Grochow, J. M. (2015). IT infrastructure projects: A framework for analysis. Retrieved January 23, 2016, from http://er.educause.edu/articles/2015/1/it-infrastructure-projects-a-framework-for-analysis

Clark, T. (2002). IP SANs: A guide to iSCSI, iFCP, and FCIP protocols for storage area networks. Pearson Education, Inc.


Computer Hardware Essay Paper


What has been the impact of faster and cheaper computers for personal and company use?

Faster and cheaper computers have altered not just how people communicate but also how companies conduct business globally. The technological revolution has been advancing at an extraordinary rate. In our modern age, cheap and fast computers and computer hardware have made it possible for companies to set up online stores where customers with computers or smartphones can buy goods and services (Laudon & Laudon, 2012).

These computers must be connected to the internet to enable this level of interaction. Companies such as eBay, Yahoo, Apple, and Amazon are perfect examples of what fast technology can do. With online shops, people can now buy almost anything online, from houses to cars to music and services, and institutions of higher learning can now conduct their lessons online. While technology has also been destructive, destroying traditional businesses and eliminating jobs in the labor market, it has made it easy for people to work from anywhere, provided they have the required infrastructure.


Computers have also improved security for companies and individuals: with CCTV cameras installed in businesses and homes, it has become easier to deter robbery and identity theft, among other crimes. The mobile phone has been touted as the most transformational technology for global economic development (Laudon & Laudon, 2012). Poor farmers, for instance, can not only check the price of their perishable crops before harvest but also identify potential buyers.

This improves their profit margins because they have access to better information. The mobile phone and the internet have also altered the banking industry: with mobile and online banking, money can flow to people in remote places, which fuels economic growth. Mobile phones also let people make informed decisions about medical services beforehand (Laudon & Laudon, 2012). Modern technology has further made it possible for high-tech companies to maximize profits at times when brick-and-mortar companies remain closed; a case in point is iTunes, which often sells millions of songs on Christmas Day, when brick-and-mortar stores are closed for the holiday.

What technological advances and benefits are driving the expansion in the use of personal computers?

A number of technological advances and benefits are driving the expansion in the use of personal computers. One is the increase in processor speed over the past 15 years; in 1998, for instance, IBM introduced experimental chips operating at about one billion cycles per second. Such chips give personal computers greater processing power and also provide significant capacity for building non-PC devices around integrated circuits.

Increased network speed likewise reduces transmission costs, improves bandwidth availability, and provides greater ability to transmit high-bandwidth content such as video. The development of digital tools drives advances in other telecommunication technologies (Gallaugher, 2012), as do advances in wide area networks (WANs) and local area networks (LANs) together with routing and bridging capabilities (Laudon & Laudon, 2012).


The continuous development of routing capabilities and related protocols is increasing the convergence of data and mobility, and this convergence in turn expands the internet's capabilities and uses.

Development of data and internet technology: recent technologies such as ATM and frame relay reduce per-unit prices while permitting economical access for many users. Furthermore, internet capabilities such as video streaming, e-commerce, and sophisticated browsers are significantly transforming the importance of the internet. Such varied uses strengthen technological advances by increasing the funding available for new ventures, which in turn raises user demand.

The emergence of mobile applications with switching and transport capabilities is enhancing internet connectivity at any place and time; the mobility of internet tools allows users to communicate from anywhere in the world and, in combination with Low Earth Orbit (LEO) satellite devices, spares users the cost of building fixed-line networks (Gallaugher, 2012). The introduction of worldwide end-user services has also contributed to considerable interoperability among users, providers, and global system integrators.


What are the limitations of faster and cheaper computers?

Traditional integrated circuits (ICs) are fabricated by high-tech companies whose sole business is producing ICs, and it is this specialization in fabrication that makes the devices cheap. Pundits suggest that the price of one transistor is comparable to the price of a single character in a newspaper. The most powerful and efficient computer systems are those powered by miniature ICs. However, no comparable technology or specialization yet exists to produce optical PC systems at the scale of contemporary IC firms.

Modern IC processors are built using very-large-scale integration (VLSI) or ultra-large-scale integration (ULSI) (Simon & Cavette, 1996); a single square millimeter of circuitry, for instance, can contain millions of transistors. Although optical components can be made small and compact, current technology does not support micro-optic integrated circuits suitable for assembling a CPU or motherboard, so new developments will be needed in the future.


Currently, traditional CPUs and computer parts are manufactured with extreme precision, in huge volumes, through complex processes. Changing from the current assembly approach to a different chip platform size can create problems, and tiny optical components must be manufactured very precisely to function properly (Simon & Cavette, 1996).

When this precision is not achieved, slight deviations can cause major problems by diverting light beams. Modern personal computers have also been assembled on the basis of the von Neumann design, and the operating system that serves as the interface is programmed to match this platform. Optical PC systems use a completely different architecture with respect to the parallelism of the system, and application programs written for one architecture are incompatible with the other.

References

Laudon, K., & Laudon, J. (2012). Essentials of MIS. (10th Ed.). Learning Track 1: How computer hardware and software works. Retrieved from http://wps.prenhall.com/wps/media/objects/14071/14409392/Learning_Tracks/Ess10_CH04_LT1_How_Computer_Hardware_and_Software_Work.pdf

Laudon, K., & Laudon, J. (2012). Essentials of MIS. (10th Ed.). Learning Track 6: Technology drivers of IT infrastructure evolution. Retrieved from http://wps.prenhall.com/wps/media/objects/14071/14409392/Learning_Tracks/Ess10_CH04_LT6_Technology_Drivers_of_IT_Infrastructure_Evolution.pdf

Gallaugher, J. (2012). Information Systems: A Harnessing Guide to Information Technology. FlatWorld Knowledge. Gallaugher Chapter 5 – E-textbook

Simon, Joel; Cavette, Chris.  (1996) “Integrated Circuit.” How Products Are Made. Retrieved January 20, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-2896600062.html


Infrastructure as a Service: Case Study


With the evolution of the market and the growing demand for service-based cloud computing infrastructure, reputable cloud infrastructure service providers need to improve on traditional platforms. Although the Magic Quadrant covers other services, this paper evaluates only cloud compute infrastructure as a service. Cloud computing is a service that delivers flexible information technology components over the internet.

Cloud computing infrastructure has evolved from being offered as a physical component to being offered as a service, and it competes ably with data centers and infrastructure-based IT initiatives. Other elements of the service-based infrastructure market are cloud printing and cloud storage; however, cloud compute infrastructure as a service (IaaS) constitutes the largest part of the market. This paper evaluates cloud computing IaaS together with the vendors known to offer the service.

As a major component of the Magic Quadrant, cloud computing infrastructure as a service is an automated, standardized offering through which service providers supply computing resources to customers on demand. The resources can be shared by multiple tenants or dedicated to a single tenant, and they can be hosted by the service provider or on the customer's premises. The service infrastructure offers user interfaces directly to the client.


Cloud infrastructure can take the form of a service or of a technology platform. Cloud infrastructure as a service has an advantage over the technology platform in that it provides services directly to the customer through self-service. However, the technology platform offers some capabilities that the service infrastructure cannot deliver on its own: cloud computing IaaS has to rely on cloud-enabled system infrastructure for activities such as outsourcing and data-enabled hosting. Even so, on its own, cloud computing IaaS can provide a variety of offerings to the customer.

Gartner clients have a strong need to control IT operations. The evaluation covers the needs of clients ranging from enterprises to retail and technology firms, and the quadrant addresses the development, analysis, and production of cloud computing IaaS both internally and externally. The service hosts diverse workloads for a range of application designs, and the Magic Quadrant places emphasis on standardized self-service and automation.

Magic Quadrants serve the differing needs of customers. Customers are most interested in self-service cloud computing infrastructure, which can still be complemented by a small number of dedicated servers to make the service more reliable.

The Magic Quadrants cover customized services for organizations that need the service or that want to supplement their traditional hosting platform. The Magic Quadrant for Managed Hosting covers cloud computing service providers based in North America, Asia, and Europe, and it also covers custom-made cloud computing services for data outsourcing and utility offerings.


The IaaS providers are known for offering exceptionally high-quality, high-performance services, and they are readily available for customer inquiries and support. The Magic Quadrant identifies the specific providers that were evaluated and profiles their strengths and weaknesses.

Characteristics of Magic Quadrant vendors include:

1.    Ownership of private and public cloud services. The customers are placed on standard infrastructure and cloud-enabled tools.

2.    The providers emphasize hybrid IT elements while keeping security and self-service control in view. Though some of the providers target start-ups, such providers normally lack the capabilities needed by big organizations; it is therefore important for the selected providers to offer unique services that allow easy access to cloud computing infrastructure.

3.    Most of the vendors combine resilient support with maintenance windows for efficient service provision.

4.    Providers mostly do not oversubscribe random access memory (RAM), and those that do not guarantee resource allocations are identified and noted. However, not all providers have the same storage capability, and only those in the quadrant offer alternative storage options.

5.    Most of the vendors offer additional SLAs covering extensive network services, customer service, and all customer inquiries.

6.    Customers define the scope of the services offered, so the infrastructure service is not fully automatic. For that reason, some providers specialize in disaster recovery in case customers want to reinstate the services. Vendors support secure virtual networking, including firewalls, and offer additional security services priced according to the customer. Self-service allows customers to bring their own portals and VM images.

8.    Finally, after evaluation, the vendors were found to be financially stable, to offer contracts in English, to sign contracts with clients, and to provide managed services on IaaS cloud computing.

As a student, I found that the analysis gives insight into the evolution of technology and how traditional computing methods are being replaced by modern technology. The globalization of services and customers' demand for technology they can control have driven the creation of the Magic Quadrant. The paper provides an overview of the gap that technology is creating in an evolving world and of the need for virtual technology. Before adopting cloud computing services, it is important to analyze the providers, since some vendors lack the tools required to deliver the service.

The paper encourages aspiration toward invention in virtual technology to meet modern market demands. The large number of vendors reflects market demand, which shows that technological innovation is the new driving force of the market, and a career in software development should not be underrated in the new digital era.

References

Gartner (2014). Magic Quadrant for Cloud Infrastructure as a Service: Case Study.



8.    Finally, after evaluation, the vendors were found to be financially stable, offer contracts in English, sign contracts with clients and provide managed services on Iaas cloud computing.

As a student, the analysis has given an insight on evolution of technology and how traditional computing methods are being replaced with modern technology. Globalization of services and customer’s demand for technology that they can have control over has driven the invention of the Magic Quadrant. The paper provides an overview of the gap that technology is creating in the evolving world and the need for virtual technology. Before one goes for cloud computing services, it is important to analyze the providers since some vendors do not have the required tools for provision of the service.

The paper influences the aspiration for invention in virtual technology to meet the modern market demands. The high number of vendors is attributable to the market demand and therefore means that technology innovation is the new market driving force. Career in software development is not to be underrated in the new digital era.

References

Gartner (2014). Magic Quadrant for Cloud Infrastructure as a Service: Case Study.

Want help to write your Essay or Assignments? Click here


The Role of Technology in Contemporary Society

SECTION A

Question a.2

Kelly (2009) tells us that technology is sometimes selfish as well as generous. Kevin Kelly makes this statement because the use of technology can produce various results. According to Kelly (2009), the results a technology produces may be positive or negative depending on what the user was aiming to achieve.

The varied results arise because technology may act in a specific manner, producing results modeled around itself and thus seeming selfish. On the other hand, technology may give us results that favor us, thereby being generous.


Question b.1

Work-life balance refers to handling job and life simultaneously with satisfaction. The concept entails getting priorities right with regard to work and life, and it advocates for proper organization of work-related tasks and personal affairs so that all objectives may be achieved.

Technology has played a major role in balancing work and life. People use technology to stand in for them when something work-related is required but family is the priority at that point in time, and vice versa. It is worth noting that work-life balance is a valid and important concept.


Question c.1

Technology is playing a major role in changing the way children think. According to Taylor (2012), children are being affected by technology both positively and negatively. Technology is seen as a great influence on how children's thinking develops. Taylor (2012) points out that children's attentiveness, decisiveness and memory are often affected by technology, and that the manner and level of technology use by children determine the effects on their thinking.

Question d.2

Copyright laws are extremely important because they control and protect people's intellectual property. They are important because they encourage people to be innovative: with proper laws protecting innovative products, many will be encouraged to innovate since they too would be protected. Additionally, copyright laws ensure fair play in a given industry, because only original products born of original ideas will be available at all times.

Competition becomes fair since copying is made illegal. Another benefit of copyright laws is that they give clear guidance on enforcement, because they provide direction regarding the prosecution of an offender. I agree that copyright laws are important: with the fairness in competition, encouragement of innovation and enforcement made possible by these laws, the world becomes a better place.


Question e.2

The digital divide refers to the situation whereby there are challenges in accessing and using information technology. It may be brought about by a lack of information and communication technology devices or a lack of the required skills. With the good progress technology has made in reaching many parts of the world, the digital divide is taking a new perspective, with the focus now on how many devices and how much skill people have. The digital divide is important to social scientists because it gives them an opportunity to study it and devise solutions for bridging the gap.

Question f.1

One aspect of stem cell technology is its broad use of embryonic cells. This has courted controversy because of the ethical concerns it raises; the use of fetuses from terminated pregnancies is particularly controversial. Secondly, stem cell technology is known to use healthy cells for transplants, and this aspect has been alleged to be behind mysterious disappearances of individuals targeted for cell harvesting.


Question g.1

The European Enlightenment refers to the transition of European society from old ways of doing things to an approach guided by the voice of reason (Gillispie, 2013). The Enlightenment period is the duration in which the European region was adopting modern ways of doing things.

Technology is one aspect of the world that was impacted positively by the European Enlightenment. The Enlightenment brought about new ideas regarding technology, and technology-oriented research was also carried out, giving technology an opportunity to grow.


SECTION B

Question 1

The United States Office of Technology Assessment (OTA) was mandated to handle matters touching on new technology and its impact, for the purpose of assisting Congress in policy making. OTA was structured to give Congress an opportunity to obtain and understand technology-related information in advance while holding a non-partisan position.

According to Rodemeyer (2005), OTA was dismantled following accusations that it was unnecessary because it allegedly duplicated the functions of other agencies. Rodemeyer does not approve of the dismantling of the Office of Technology Assessment. His disapproval stems from his opinion that Congress lacks technological know-how and that OTA offered reports without bias. According to Rodemeyer (2005), technology assessment faces a dilemma based on the independence of its officers.


Question 2

Gordon (2013) argues that the American economy is slowing down. According to Gordon (2013), the reasons for the slow growth of the American economy include:

  • Increasing inequality: the lack of equity experienced by the American population has led to poor economic cooperation, thus slowing down the economy.
  • A dull education system: the American education system has not been able to produce competent citizens.
  • High levels of student indebtedness: people in college find themselves caught up in so much debt, which they must repay as soon as they secure jobs, that their investment options are limited.
  • A high number of old people: the working American population is growing older day by day and thus becoming less productive. This is made worse by the fact that the education system is not producing productive people as before.

A poor education system has failed to push technology to higher levels, which has ended up contributing negatively to economic growth.


Question 3

The argument that the transformation from hunter-gatherer life to agriculture was the worst mistake in human history rests on several claims. The arguments in support of the hunter-gatherer life are that there was more leisure time, more sleep and less time spent searching for food. Additionally, hunter-gatherers are argued to have enjoyed a good diet derived from a mixture of wild meat and fruits.

Question 4

In life, there are various forms of capital, including social capital, human capital and cultural capital. Human capital refers to having the right people for a given task; it entails putting in place people with high levels of competence. Social capital, on the other hand, refers to the individuals within a person's social circle and is usually concerned with the input contributed by those within that circle. Cultural capital refers to aspects of life that place individuals at the top of the social classes in society; it entails having high levels of knowledge and skills, among other attributes.


Question 5

Robinson (2010) says that there is a need for a radical approach to education. Robinson (2010) argues that thinking in education should embrace diversity, aimed at letting learners shape their education toward having multiple solutions for a problem. Secondly, Robinson (2010) states that education should be planned in a way that supports industrial productivity.



Database Characteristics and the Language of Health Information

Introduction

Electronic information systems are a strategic choice that any organization can adopt. Information systems help organizations store information in an organized format that can be easily retrieved. Using information systems in hospitals will guarantee the safety of information for both the patient and the provider by making it easy to store and access health care information.

This is a shift from manual, hard-copy storage of data to digital storage of information (Beaumont, 2000), enabling health data to be stored, retrieved and processed easily. The data is stored in a database that keeps all the information in the format that the administrator has assigned to it. This overview is guided by the questions highlighted below.

The hospital currently stores records on paper copies and files. To access a record, patient records have to be searched through the numerous files within the hospital and its respective centres. Furthermore, the hospital needs the information from its centres linked to the main hospital so that it can be easily accessed.

The aim of this project is to develop an electronic health information system that captures all the information of the hospital and its centres in one database that is easy to access and reliable. This presentation gives an overview of the relevance of adopting a health management system and highlights the value of shifting from manual paperwork to a digital model of record keeping.

Fundamentals of database characteristics and structure

A database is a collection of related data that can be processed into information relevant to the user. A database is typically large since it has to store a great deal of information, ranging from figures to text. Beaumont (2000) argues that data represents recorded facts that can be processed to produce information based on the facts stored in the database.

This data is maintained as a collection of files managed by a database management system (DBMS). A DBMS provides several programs that enable users to enter data into the system and process it into information that is relevant to the end user.
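
To make this concrete, below is a minimal sketch using Python's built-in sqlite3 module: raw facts are entered into a table and then processed into information (a count of visits per diagnosis). The table, column names and sample rows are illustrative assumptions only, not part of the hospital's actual design.

```python
# Minimal sketch: storing related data in a DBMS and turning it into information.
# Uses Python's built-in sqlite3 module; all names and values are illustrative.
import sqlite3

conn = sqlite3.connect("hospital_demo.db")  # the DBMS maintains the data files for us
cur = conn.cursor()

# Define how the data is organized (the schema is stored alongside the data,
# which is part of what makes the database "self-describing").
cur.execute("""
    CREATE TABLE IF NOT EXISTS visit (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL,
        diagnosis  TEXT    NOT NULL,
        visit_date TEXT    NOT NULL
    )
""")

# Enter raw data (facts)...
cur.executemany(
    "INSERT INTO visit (patient_id, diagnosis, visit_date) VALUES (?, ?, ?)",
    [(1, "asthma", "2015-03-02"), (2, "hypertension", "2015-03-02"), (1, "asthma", "2015-04-10")],
)
conn.commit()

# ...and process it into information relevant to the end user,
# for example how many visits were recorded per diagnosis.
for diagnosis, visits in cur.execute("SELECT diagnosis, COUNT(*) FROM visit GROUP BY diagnosis"):
    print(diagnosis, visits)

conn.close()
```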

Want help to write your Essay or Assignments? Click here

In changing the hospital system to an EHR, a database will be developed in which data is entered for access by several users on different platforms. The database is self-describing, insulates programs from data, supports viewing of data from multiple sources and enables the sharing of data across several users.

The database will be easy to use since it includes definitions of its components, such as the storage format, the structure of individual files, and any data constraints that exist. The database will have different users, distinguished by the way they use it: they can be programmers, sophisticated users, specialized users or naïve users.

All these users can access the database, but their use is limited according to the administrator privileges that exist in the database (Versel, 2011). The administrator coordinates the whole database system, understands the needs of each user and assigns the appropriate privileges to each user.
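
As a simple illustration of how administrator-assigned privileges might limit what each class of user can do, the sketch below maps hypothetical roles to permitted operations; the roles and operations are assumptions for illustration, not an actual access-control policy.

```python
# Illustrative sketch of administrator-assigned privileges; the roles and
# permitted operations below are hypothetical examples, not a real access policy.
ROLE_PRIVILEGES = {
    "administrator": {"read", "write", "delete", "grant"},
    "physician":     {"read", "write"},
    "receptionist":  {"read"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Return True if the given role may perform the given operation."""
    return operation in ROLE_PRIVILEGES.get(role, set())

# Example: a receptionist may look records up but may not modify them.
print(is_allowed("receptionist", "read"))   # True
print(is_allowed("receptionist", "write"))  # False
```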

Types of medical data and information records relevant to this project   

According to Szolovits (2003), hospitals keep different types of data that are relevant to both the government and the healthcare facility. The information is used in government planning for specific cases of illness and also in determining patient disease patterns. The database will contain patient records and health records.

Patient medical records contain identifying details such as name, sex, age, residence, blood type, chronic diseases, family health history and previous prescriptions administered to the patient. This data is entered in a database that can be shared across hospitals in a digital format through a network connecting all hospitals.

This helps ensure that the medication given to the patient is consistent, unlike the manual system where the patient may have to narrate the prescriptions given to them (Szolovits, 2003). Individual files for each patient are kept to help in making diagnoses for future cases of illness; the records help the patient and the doctor to make a diagnosis that best fits the patient's situation.
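
The sketch below shows how the patient fields listed above might be laid out as a schema, with prescriptions kept as an individual history linked to each patient; all table and column names are illustrative assumptions for this project, not a final design.

```python
# Sketch of a patient-record schema covering the fields listed above;
# table and column names are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect("hospital_demo.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS patient (
        patient_id       INTEGER PRIMARY KEY,
        name             TEXT NOT NULL,
        sex              TEXT,
        age              INTEGER,
        residence        TEXT,
        blood_type       TEXT,
        chronic_diseases TEXT,
        family_history   TEXT
    );

    -- Previous prescriptions are kept as an individual history per patient,
    -- linked back to the patient record by patient_id.
    CREATE TABLE IF NOT EXISTS prescription (
        prescription_id INTEGER PRIMARY KEY,
        patient_id      INTEGER NOT NULL REFERENCES patient(patient_id),
        drug            TEXT NOT NULL,
        date_prescribed TEXT NOT NULL
    );
""")
conn.commit()
conn.close()
```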

Health records, on the other hand, give a summary of the healthcare services and patterns registered in the facility. These records are classified using different indicators: for example, by the disease that has been diagnosed or by the type of drugs administered to patients. Such records are used by planners and policy makers to make decisions that affect the healthcare system (Versel, 2011). The type of health information stored will depend on the state requirements that have been set.

The records will be linked to the main server located in the main facility. Each facility will have a login ID that is used to record the cases registered in that facility, so that cases can easily be distinguished as having been registered in one centre or another.
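
Building on the illustrative schema above, the following sketch assumes each recorded case carries the facility's login ID and shows the kind of facility-level summary (cases per centre per diagnosis) that health records could provide to planners; the names and fields are assumptions for illustration.

```python
# Sketch of the facility-level summaries described above, assuming each
# recorded case carries the facility's login ID; all names are illustrative.
import sqlite3

conn = sqlite3.connect("hospital_demo.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS case_record (
        case_id     INTEGER PRIMARY KEY,
        facility_id TEXT NOT NULL,   -- login ID of the centre that recorded the case
        diagnosis   TEXT NOT NULL,
        case_date   TEXT NOT NULL
    )
""")

# A health-record style summary: cases per facility per diagnosis,
# the kind of indicator planners and policy makers could use.
summary = conn.execute("""
    SELECT facility_id, diagnosis, COUNT(*) AS cases
    FROM case_record
    GROUP BY facility_id, diagnosis
    ORDER BY facility_id, cases DESC
""").fetchall()

for facility_id, diagnosis, cases in summary:
    print(facility_id, diagnosis, cases)

conn.close()
```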


The importance of uniform terminology, coding, and standardization of the data

Using uniform terminology entails harmonising the existing health information systems so that they use similar terms throughout. Since health standards are common and have been set by the WHO, the terminologies used should apply across the globe. Uniform terminology enables health information and data to be exchanged among systems in a consistent manner; medical terms therefore have to be understood universally (Elmasri & Navathe, 2003).

Coding enables practitioners and the health information system to interpret data easily using the health information built into the system. Computer-assisted coding increases the efficiency and consistency of the codes so that they do not have to be generated manually (Elmasri & Navathe, 2003).

Coding is further used in clinical health surveillance and in decision support within healthcare. It makes the interpretation of data easy, thus improving health surveillance and the application of health information universally (Elmasri & Navathe, 2003).

Elmasri & Navathe (2003) further argue that universal standardization of data ensures a uniform platform on which all practitioners work. This improves quality and efficiency in health care. Standards are defined by several organizations, such as ISO, which ensure that all practitioners use a standard platform in healthcare.
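
As a small illustration of uniform terminology and coding, the sketch below maps locally entered diagnosis terms to standardized category codes. The mapping is deliberately tiny and the code values are illustrative; a real system would rely on a complete, maintained terminology such as ICD rather than a hand-written dictionary.

```python
# Sketch of mapping free-text diagnosis terms to standardized codes so that
# different systems describe the same condition the same way. The code values
# shown are illustrative category codes; a real system would use a complete,
# maintained terminology service.
TERM_TO_CODE = {
    "type 2 diabetes":        "E11",
    "essential hypertension": "I10",
    "asthma":                 "J45",
}

def standardize(diagnosis_text: str) -> str:
    """Return the standardized code for a locally entered diagnosis term."""
    key = diagnosis_text.strip().lower()
    return TERM_TO_CODE.get(key, "UNKNOWN")

print(standardize("Asthma"))                  # J45
print(standardize("essential hypertension"))  # I10
```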


Information standards and organizations that may be applicable, and possibly required, for this project

In the current world, where quality is a prerequisite, there are standards required for every organization or application. ISO TC 215 sets the standards required for electronic health records and provides the international specifications described in ISO 18308 (Szolovits, 2003).

In addition, 55 countries have subscribed to Health Level Seven International (HL7), the global authority on healthcare information standards. Various standards that guide the use of electronic health records are listed below.

HL7: a messaging protocol between physician record systems and practice management systems (a simplified example of such a message appears after this list).

ASC X12: a protocol for transmitting patient data, commonly used in the US.

Claims attachment standard: it guides the submission and making of claims in a healthcare system.

Personal health records standard: it ensures the uniformity of patient health records across countries.
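
To give a feel for the pipe-delimited messages used by HL7 version 2, the sketch below parses a simplified, illustrative message into its segments and reads the patient identifier and name. The message content and the field positions used here are illustrative only; real systems should follow the full HL7 specification or use a dedicated parsing library.

```python
# Simplified illustration of an HL7 v2-style pipe-delimited message.
# The message text and field positions are illustrative, not a complete spec.
sample_message = (
    "MSH|^~\\&|SENDING_APP|MAIN_HOSPITAL|RECEIVING_APP|CENTRE_A|201503021200||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||12345||Doe^Jane||19800101|F\r"
)

def parse_segments(message: str) -> dict:
    """Split a pipe-delimited message into {segment name: list of fields}."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

segments = parse_segments(sample_message)
pid = segments["PID"]
print("Patient ID:", pid[3])    # 12345
print("Patient name:", pid[5])  # Doe^Jane
```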

Healthcare information systems vendors that offer electronic medical record products

Acummedic Health: a practice management and EHR application that is customised to capture the healthcare flow from first contact with the patient through to discharge. The advantage of this system is that it gives the user the opportunity to add modules relevant to their agency. It supports the HL7 standard and offers several packages, including substance abuse, behavioural health and community services. It has been in use since 1977 and offers a good platform for EHR (Versel, 2011).

Acumen Physician Solutions is designed for nephrologists; it offers physician guidelines and ambulatory services and is wholly owned by Fresenius Medical Care North America. The services it offers are therefore linked to Fresenius Medical Care North America (Versel, 2011).

BML MedRecords Alert LLC was designed to provide more efficient solutions and a better healthcare system. It provides physicians with a digital platform to interact with and gather information from patients. It allows patients to access their medical information easily from anywhere, which can be valuable during an emergency. It further offers medical alerts that patients can use and an online library for reference, leading to both quality and efficiency in healthcare (McBride, 2012).


From the three EHR vendors, it is noted that they vary in their applications but all offer interaction between the patient and the healthcare provider. Acumen Physician Solutions offers ambulatory services in addition to the services the others offer, while BML MedRecords Alert LLC offers a patient profile that the patient can search through the website to get information relevant in an emergency.

The patient is able to access health records easily and can interact directly with the physician without physical contact. Acummedic Health, meanwhile, is an open platform that enables the user to change and add relevant modules, which helps explain why it has been in use since 1977. All three EHR vendors are therefore sound choices, and the decision will depend on the user's preferences and requirements. The cost of installing the system starts at a minimum of US$3,000.

Conclusion

An electronic health records system helps to coordinate healthcare and makes its provision easy and fast for patients. According to Groves et al. (2013), health facilities use such systems to increase the performance and efficiency of the healthcare system. It assists healthcare providers in exchanging and coordinating information from one source to another, and it provides practitioners with immediate access to health records and literature, which helps in diagnosing medical cases.

The sharing of information between the patient, the practitioner and other health facilities has improved the quality of care. This is the invention that has brought healthcare to the patient's doorstep and further reduced the distance between the patient and the hospital.

References

Beaumont, R. (2000). Database and Database Management Systems. Retrieved August 12, 2009, from http://www.fhi.rcsed.ac.uk/rbeaumont/virtualclassroom/chap7/s2/dbcon1.pdf

Groves, P., Kayyali, B., Knott, D., & Kuiken, S. (2013, January). The big data revolution in healthcare: Accelerating value and innovation. Center for US Health System Reform, McKinsey & Company.

McBride, M. (2012, July). Understanding the true costs of an EHR implementation plan. Medical Economics.

Elmasri, R., & Navathe, S. (2003). Fundamentals of database systems (4th ed.). New York: Pearson.

Szolovits, P. (2003). Nature of Medical Data. MIT, Intro to Medical Informatics: Lecture-2. Retrieved on August 12, 2009 from http://groups.csail.mit.edu/medg/courses/6872/2003/slides/lecture2-print.pdf

Versel, N. (2011, September). 12 EHR vendors that stand out. InformationWeek Healthcare.



Data Security: Analysis of Effect of Cloud Computing

Cloud Computing: Security, New Opportunities and Challenges

Section 1: Topic Endorsements

Cloud computing has been a very successful invention. It has created new opportunities for businesses in terms of storage space, access to software and other facilities. Additionally, many businesses have stated that they feel their data is more secure when it is held in the cloud (Pearson and Yee, 2013). However, just as many businesses and experts have stated that cloud computing exposes data and makes businesses vulnerable in terms of data security (Ali, Khan and Vasilakos, 2015).

For computer scientists, understanding security issues as they relate to cloud computing is important for several reasons (Ali et al., 2015). The first and most important is that, in the future, any establishment will likely depend, partly or entirely, on cloud computing for data storage (Mahmood, 2014). As computer specialists, computer scientists will be relied upon to monitor data, ensure security and make the necessary adjustments where they are required (Verma and Kaushal, 2011).


Section 2: Research Overview

The research literature indicates that we know cloud computing, even though safer than botnets, has created unexpected side channels because it is a shared resource (Verma and Kaushal, 2011). We also know that security incidents related to cloud computing are not unique, just different (Flinn, 2012). However, we do not know exactly how to improve data security within cloud computing, although several measures, such as passwords, can be used to improve security (Pearson and Yee, 2013).
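
As a small illustration of one password-based measure, the sketch below stores salted password hashes (using Python's standard library) instead of plaintext, so that a compromised cloud-hosted credential store does not directly reveal user passwords; the salt size and iteration count shown are illustrative, not recommendations.

```python
# Minimal sketch of one password-based measure: storing salted password hashes
# instead of plaintext so a leaked cloud-hosted database does not reveal passwords.
# Parameters (salt size, iteration count) are illustrative only.
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) to store instead of the plaintext password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes, iterations: int = 200_000) -> bool:
    """Check a login attempt against the stored salt and key using a constant-time comparison."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```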

Cloud computing means that the devices used to provide required computing services do not belong to the end users (Krutz and Vines, 2010). As a result, users do not have control over how these devices are operated or who has access to them (Zhu, Hill and Trovati, 2015). Research regarding cloud computing has been extensive over the last decade as the service has continued to become more common among businesses and individuals. With each security development made, new ways to breach security come up as well (Samani, Honan, Reavis, Jirasek and CSA, 2015).

The basic research question is whether cloud computing has led to an increase or a decrease in data security in enterprises, given that it has become a business necessity (Nepal, Pathan and SpringerLink, 2014). The purpose of the study is to provide insight into cloud computing and how best to ensure data security. The methodology suggested for this research is equation methodology, and Moustakas' transcendental phenomenology is the suggested research model. With this in mind, the dissertation title becomes: Cloud Computing: Security, New Opportunities and Challenges (Schwarzkopf, Schmidt, Strack, Martin and Freisleben, 2012).

References

Ali, M., Khan, S. U., & Vasilakos, A. V. (2015). Security in cloud computing: Opportunities and challenges. Information Sciences, 305, 357-383.

Flinn, J. (2012). Cyber foraging: Bridging mobile and cloud computing. San Rafael, Calif.: Morgan & Claypool.

Samani, R., Honan, B., Reavis, J., Jirasek, V., & CSA. (2015). CSA guide to cloud computing: Implementing cloud privacy and security. Waltham, MA: Syngress.

Krutz, R. L., & Vines, R. D. (2010). Cloud security: A comprehensive guide to secure cloud computing. Indianapolis, Ind: Wiley Pub.

Zhu, S. Y., Hill, R., & Trovati, M. (2015). Guide to security assurance for cloud computing.

Mahmood, Z. (2014). Cloud computing: Challenges, limitations and R&D solutions.

Pearson, S., & Yee, G. (2013). Privacy and security for cloud computing. London: Springer.

Nepal, S., & Pathan, M. (2014). Security, privacy and trust in cloud systems. Berlin, Heidelberg: Springer.

Verma, A., & Kaushal, S. (2011). Cloud computing security issues and challenges: A survey.

Schwarzkopf, R., Schmidt, M., Strack, C., Martin, S., & Freisleben, B. (2012). Increasing virtual machine security in cloud environments. BioMed Central Ltd.
