It's a Data Science course at the PhD level. The assignment file is called "activity7.docx".

The "scholarly_Reference_from_school_library_activity7.pdf" is to be used as one of the required references from the school library, as stated in the assignment file (activity7).

NB:

This assignment is a continuation of the assignments in orders 117693 and 117692.

I would prefer this assignment to be done by the same expert who just completed orders 117693 and 117692.

Please review and let me know.

Thanks
Activity7

This week, you will create a data democratization plan for the hypothetical use case described in the data management plan created in activity5 and expanded on in activity6. Be sure to include the following information in your plan:

· Describe best practices for data democratization to enable a group of researchers to access needed well-curated datasets (or raw data sources).
· Assume that data democratization has been an issue in the past in this hypothetical use case. Explain how the data culture within an organization will be transformed to support data democratization initiatives.
· Discuss the specific policies and procedures that will ensure that everyone has access to needed data. Include what existing barriers between data and stakeholders will be removed or watched for and what structure will be used to ensure that all stakeholders can access, understand, and use the data in their possession.

Length: 7 to 8-page technical report, not including title and references pages

References: Include a minimum of 3 scholarly references (be sure that at least one of the three is a peer-reviewed research study involving data usage and retention planning from the school library to support your ideas).

NB: Scholarly reference: check attached PDF file.

The Democratization of Big Data
Sean Fahey*

In recent years, it has become common for discussions about managing and analyzing information to reference "data scientists" using "the cloud" to analyze "big data." Indeed these terms have become so ubiquitous in discussions of data processing that they are covered in popular comic strips like Dilbert and the terms are tracked on Gartner's Hype cycle.[1] The Harvard Business Review even labeled data scientist as "the sexiest job of the 21st century."[2] The goal of this paper is to demystify these terms and, in doing so, provide a sound technical basis for exploring the policy challenges of analyzing large stores of information for national security purposes.

It is worth beginning by proposing a working definition for these terms before exploring them in more detail. One can spend much time and effort developing firm definitions for these terms – it took the National Institutes of Science and technology several years and sixteen versions to build consensus around the definition of cloud computing in NIST Special Publication 800-145[3] – the purpose here is to provide definitions that will be useful in furthering discussions of policy implications. Rather than defining big data in absolute terms (a task made nearly impossible by the rapid pace of advancements in computing technologies) one can define big data as a collection of data that is so large that it exceeds one's capacity to process it in an acceptable amount of time with available tools. This difficulty in processing can be a result of the data's volume (e.g., its size as measured in petabytes[4]), its velocity (e.g., the number of new data elements added each second), or its variety (e.g., the mix of different types of data including structured and unstructured text, images, videos, etc . . . ).[5]

Examples abound in the commercial and scientific arenas of systems managing massive quantities of data. YouTube users upload over one hundred hours of video every minute,[6] Wal-Mart processes more than one million transactions each hour, and Facebook stores, accesses and analyzes more than thirty petabytes of user-generated data.[7] In scientific applications, the Large Hadron Collider generates more than fifteen petabytes of data annually which are analyzed in the search for new subatomic particles.[8] Looking out into space rather than inward into particles, the Sloan Digital Sky Survey mapped more than a quarter of the sky gathering measurements for more than 500 million stars and galaxies.[9] In the national security environment, increasingly high quality video and photo sensors on unmanned aerial vehicles (UAVs) are generating massive quantities of imagery for analysts to sift through to find and analyze targets. For homeland security, the recent Boston marathon bombing investigation proved both the challenge and potential utility of being able to quickly sift through large volumes of video data to find a suspect of interest.

While the scale of the data being collected and analyzed might be new, the challenge of finding ways to analyze large datasets is a problem that has been around for at least a century. The modern era of data processing could be considered to start with the 1890 census where the advent of punch card technology allowed the decennial census to be completed in one rather than eight years.[10] World War II spurred the development of code breaking and target tracking computers, which further advanced the state of practice in rapidly analyzing and acting upon large volumes of data.[11] The Cold War along with commercial interests, further fueled demand for increasingly high performance computers that could solve problems ranging from fluid dynamics and weather to space science, stock trading and cryptography.

For decades the United States government has funded research to accelerate the development of high performance computing systems that could address these challenges. During the 1970s and 1980s this investment yielded the development and maturation of supercomputers built around specialized hardware and software (e.g., Cray 1). In the 1990s, a new generation of high performance computers emerged based not on specialized hardware but instead clustering of mass-market commodity PCs (e.g., Beowulf clusters).[12] This cluster computing approach sought to achieve high performance without the need for, and cost of, specialized hardware.

The development and growth of the internet in the 1990s and 2000s led to the development of a new wave of companies including Google, Amazon, Yahoo and Facebook that captured and needed to analyze data on a scale that had been previously infeasible. These companies sought to understand the relationships among pieces of data such as the links between webpages or the purchasing patterns of individuals, and use that knowledge to drive their businesses. Google's quest for a better search engine led their index of web pages to grow from one billion pages in June 2000 to three billion in December 2001 to eight billion in November 2004.[13] This demand for massive scale data processing led to a revolution in data and the birth of the modern "big data" era. These companies developed scalable approaches to managing data that built upon five trends in computing and business. The result of their efforts, in addition to the development of several highly profitable internet companies, was the democratization of big data analysis – a shift which resulted in big data analysis being available not only to those who had the money to afford a supercomputer, or the technical skill to develop, program and maintain a Beowulf cluster, but instead to a much wider audience. The democratization of big data analysis was possible because of multiple independent ongoing advances in the computing including the evolution of computer hardware, new computer architectures, new operating software and programming languages, the open source community, and new business models.

The growth in computing capability begins with advances at the chip and storage device level. For nearly fifty years, the semiconductor industry has found ways to consistently deliver on Gordon Moore's 1965 prediction that the number of components on an integrated circuit would double every year or two. This has led to the development of increasingly powerful computer chips and, by extension, computers. At the same time, manufacturers of storage devices like hard drives have been able to also achieve exponential increases in the density of storage along with exponential decreases in the cost of storage.[14] "Since the introduction of the disk drive in 1956, the density of information it can record has swelled from a paltry 2,000 bits to 100 billion bits (gigabits), all crowded in the small space of a square inch. That represents a 50-million-fold increase."[15] This increase in computing power and storage capacity has outstripped the needs of most individuals and programs and led to a second key innovation underlying modern big data – virtualization.

Virtualization is the process of using one physical computer to host multiple virtual computers. Each virtual computer operates as though it were its own physical computer even though it is actually running on shared or simulated hardware. For example, rather than having five web servers each operating at 10% capacity, one could run all five web servers on one virtualized server operating at 50% capacity. This shift from physical to virtual machines, has created the ability to easily add new storage or processing capabilities on demand when needed and to modify the arrangement of those computers virtually rather than having to physically run new network cabling. This has led to the growth of "cloud computing" which NIST defines as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."[16] Providers such as Amazon have created services (e.g., Amazon Web Services) to allow individuals and companies to benefit from cloud computing by purchasing storage and processing as required. Rather than investing up front in a datacenter and computing hardware, companies can now purchase computing resources as a utility when needed for their business.

While increased computing power, virtualization, and the development of cloud computing business models were fundamental to the advent of the current big data era, they were not sufficient. As late as 2002, analysis of large quantities of data still required specialized supercomputing or expensive enterprise database hardware. Advances in cluster computing showed promise but had not yet been brought to full commercial potential. Google changed this between 2003 and 2006 with the publication of three seminal papers that together laid the foundation for the current era of big data.

Google was conceived from its founding to be a massively scalable as a web search company.[17] It needed to be able to index billions of webpages and analyze the connections between those pages. That required finding new web pages, copying their contents onto Google servers, identifying the content of the pages, and divining the degree of authority of a page. The PageRank algorithm developed by Larry Page laid out a mathematical approach to indexing the web but required a robust information backbone to allow scaling to internet size. Google's first step to addressing this scalability challenge was, around 2000, to commit to using commodity computer hardware rather than specialized computer hardware. In doing so the company assumed failures of computers and disk drives would be the norm and so had to design a file system to have constant monitoring, error detection, fault tolerance, and automatic recovery. Google developed a distributed file system (called the Google File System) that accomplished this by replicating data across multiple computers so that the data would not be lost if any one computer or hard drive were to fail. The Google File System managed the process of copying and maintaining awareness of file copies in the system so programmers didn't have to.[18]

Google's second innovation to address scalability was to develop a new programming language to allow processing of the data in the distributed

Footnotes:
* DHS Programs Manager, Applied Physics Lab, and Vice Provost for Institutional Research, The Johns Hopkins University. © 2014, Sean Fahey.
1. Scott Adams, Dilbert, DILBERT (July 29, 2012), http://dilbert.com/strips/comic/2012-07-29/; Gartner's 2013 Hype Cycle for Emerging Technologies Maps Out Evolving Relationship Between Humans and Machines, GARTNER (Aug. 19, 2013), http://www.gartner.com/newsroom/id/2575515.
2. Thomas H. Davenport & D. J. Patil, Data Scientist: The Sexiest Job of the 21st Century, HARV. BUS. REV., Oct. 2012, at 70.
3. Nat'l Inst. of Standards & Tech, Final Version of NIST Cloud Computing Definition Published, NIST (Oct. 25, 2011), http://www.nist.gov/itl/csd/cloud-102511.cfm.
4. One petabyte is equal to one million gigabytes.
5. Edd Dumbill, What is Big Data, O'REILLY (Jan. 11, 2012), http://strata.oreilly.com/2012/01/what-is-big-data.html.
6. Statistics, YOUTUBE, http://www.youtube.com/yt/press/statistics.html.
7. A Comprehensive List of Big Data Statistics, WIKIBON BLOG (Aug. 1, 2012), http://wikibon.org/blog/big-data-statistics/.
8. Computing, CERN, http://home.web.cern.ch/about/computing.
9. Sloan Digital Sky Survey, SDSS, http://www.sdss.org/.
10. Uri Friedman, Anthropology of an Idea: Big Data, FOREIGN POLICY, Nov. 2012, at 30, 30.
11. Id.
12. See Thomas Sterling & Daniel Savarese, A Coming of Age for Beowulf-Class Computing, in 1685 LECTURE NOTES IN COMPUTER SCIENCE: EURO-PAR '99 PARALLEL PROCESSING PROCEEDINGS 78 (1999).
13. Our History in Depth, GOOGLE, http://www.google.com/about/company/history/.
14. Chip Walters, Kryder's Law, SCI. AM., Aug. 2005, at 32; see also Matt Komorowski, A History of Storage Cost, MKOMO BLOG, http://www.mkomo.com/cost-per-gigabyte (graphing the decrease in hard drive cost per gigabyte over time).
15. Walters, supra note 16, at 32.
16. NAT'L INST. OF STANDARDS & TECH, THE NIST DEFINITION OF CLOUD COMPUTING, SPECIAL PUBLICATION 800-145, at 2 (2011), available at http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.
17. Sergey Brin & Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, COMP. NETWORKS & ISDN SYS., Apr. 1998, at 107, available at http://infolab.stanford.edu/backrub/google.html.
18. SANJAY GHEMAWAT, HOWARD GOBIOFF, & SHUN-TAK LEUNG, THE GOOGLE FILE SYSTEM (2003), available at http://static.googleusercontent.com/media/research.google.com/en/us/archive/gfs-sosp2003.pdf.
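The excerpt above explains the key idea behind the Google File System: treat hardware failure as normal and keep each piece of data on several machines so nothing is lost when one of them dies. The sketch below is not from the article and uses no Google API; it is a minimal, hypothetical Python illustration of that replication-and-recovery idea, with made-up node names and chunk IDs.

```python
# Illustrative sketch only: data survives the loss of one node because every
# chunk is replicated on several nodes. All names and numbers are hypothetical.
import random
from typing import Dict, List

REPLICATION_FACTOR = 3  # each chunk is stored on this many distinct nodes

class TinyReplicatedStore:
    def __init__(self, node_names: List[str]) -> None:
        # Each "node" is just a dict from chunk id to bytes.
        self.nodes: Dict[str, Dict[str, bytes]] = {n: {} for n in node_names}

    def put(self, chunk_id: str, data: bytes) -> None:
        """Write the chunk to REPLICATION_FACTOR randomly chosen nodes."""
        targets = random.sample(list(self.nodes), k=REPLICATION_FACTOR)
        for name in targets:
            self.nodes[name][chunk_id] = data

    def get(self, chunk_id: str) -> bytes:
        """Read the chunk from any surviving replica."""
        for store in self.nodes.values():
            if chunk_id in store:
                return store[chunk_id]
        raise KeyError(f"all replicas of {chunk_id} lost")

    def fail_node(self, name: str) -> None:
        """Simulate a disk or machine failure by discarding one node."""
        del self.nodes[name]

if __name__ == "__main__":
    store = TinyReplicatedStore(["node1", "node2", "node3", "node4"])
    store.put("chunk-0001", b"crawled page contents ...")
    store.fail_node("node1")            # one machine dies
    print(store.get("chunk-0001"))      # the data is still readable elsewhere
```

The real system described in the excerpt also monitors replicas and re-copies data automatically when copies are lost; the sketch only shows why replication lets commodity hardware fail without losing data.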

Banasree answered on Mar 26 2023
1. Ans)
Data democratization refers to the process of making data more accessible and available to a wider group of people, with the aim of empowering them to make data-driven decisions. In the context of research, data democratization can be crucial in enabling multiple researchers to access and analyze datasets, leading to a more collaborative and transparent research process. Here are some best practices to consider when implementing data democratization in any research setting:
1. Standardization of data: Before democratizing research data, it's important to standardize it so that all researchers can easily access and interpret the data. This involves ensuring that the data is in a common format, and that all variables and metadata are clearly defined. This will help to prevent confusion and errors when researchers are analyzing the data.
2. Clear governance policies: A clear governance policy outlining who can access the data, what the data can be used for, and how it should be managed is essential to ensure that data is not misused or abused. A governance policy can also help to protect sensitive data and ensure that data is used ethically and in line with legal and ethical frameworks.
3. Establish data access protocols: Researchers need to know how to access the data, and the process of requesting and obtaining access should be clearly defined. This can include setting up a data access portal (Kitchin, n.d.), providing researchers with access credentials, and providing clear instructions on how to use the data access system.
4. Develop clear data sharing agreements: When sharing data with other researchers, it's important to develop clear data sharing agreements that outline how the data can be used, who has access to the data, and what the data can be used for. This can help to prevent misuse of the data and ensure that all parties are aware of their responsibilities when using the data.
5. Enable data discovery: Researchers need to be able to find the data they need in order to conduct their research. This can be facilitated by creating a data catalog that describes the available datasets and provides information on how to access them. The catalog should be searchable and include relevant metadata to help researchers find the data they need; a minimal sketch of such a catalog follows this list.
6. Provide training and support: Researchers may need training on how to use the data access system or how to analyze the data. Providing training and support can help to ensure that researchers are able to use the data effectively and can minimize errors or misuse of the data.
7. Encourage collaboration and sharing: Data democratization (T. Malamud, n.d.) can facilitate collaboration and sharing between researchers, leading to more innovative and impactful research. Researchers should be encouraged to share their findings and collaborate on research projects using the available data.
By implementing these best practices, data democratization (Ma, n.d.) can be a powerful tool in enabling researchers to access and analyze well-curated datasets. By promoting collaboration and transparency, data democratization can help to advance research and facilitate the development of new ideas and solutions.
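To make items 1, 3, and 5 above more concrete, the sketch below shows one possible shape for a small, searchable data catalog with standardized metadata and a simple role-based access check. It is an illustrative sketch only; the dataset name, fields, roles, and email address are hypothetical and not taken from the assignment or the cited sources.

```python
# Illustrative sketch only: a tiny in-memory data catalog with standardized
# metadata and a simple role-based access check. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DatasetEntry:
    """One catalog record: the metadata every dataset must provide."""
    name: str
    description: str
    owner: str
    data_format: str                  # e.g., "csv", "parquet"
    variables: Dict[str, str]         # variable name -> plain-language definition
    allowed_roles: List[str] = field(default_factory=lambda: ["researcher"])

class DataCatalog:
    """Searchable catalog plus a minimal access-request check."""
    def __init__(self) -> None:
        self._entries: Dict[str, DatasetEntry] = {}

    def register(self, entry: DatasetEntry) -> None:
        self._entries[entry.name] = entry

    def search(self, keyword: str) -> List[str]:
        """Return dataset names whose name or description mentions the keyword."""
        kw = keyword.lower()
        return [e.name for e in self._entries.values()
                if kw in e.name.lower() or kw in e.description.lower()]

    def can_access(self, dataset_name: str, role: str) -> bool:
        """Stand-in for the governance policy: is this role allowed to use the data?"""
        entry = self._entries.get(dataset_name)
        return entry is not None and role in entry.allowed_roles

if __name__ == "__main__":
    catalog = DataCatalog()
    catalog.register(DatasetEntry(
        name="clinical_trial_2022",
        description="De-identified trial results for the hypothetical use case",
        owner="data.steward@example.org",
        data_format="csv",
        variables={"patient_id": "pseudonymous identifier", "outcome": "primary endpoint"},
        allowed_roles=["researcher", "data_steward"],
    ))
    print(catalog.search("clinical"))                              # ['clinical_trial_2022']
    print(catalog.can_access("clinical_trial_2022", "researcher")) # True
    print(catalog.can_access("clinical_trial_2022", "guest"))      # False
```

In a real deployment the catalog would sit behind the data access portal described in item 3, with roles supplied by the organization's identity system rather than a hard-coded list; the sketch is only meant to make the plan's moving parts concrete.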
2. Ans)
Assuming that data democratization (Kallberg, n.d.) has been an issue in the past in this hypothetical use case, it is essential to transform the data culture within the organization to support data democratization initiatives. The following are the ways in which the data culture within the organization can be transformed to support data democratization initiatives:
1. Create a Data-Driven Culture: The organization should create a data-driven culture where data is used to make informed decisions. This can be done by creating awareness about the importance of data in decision-making processes. The organization should also create a framework for data governance to ensure that data is managed and used effectively.
2. Encourage Collaboration: The organization should encourage collaboration between departments and individuals. This can be done by creating a platform where individuals can share data, insights, and best practices. The organization should also encourage cross-functional teams to work together to solve problems that require data insights.
3. Educate and Train Employees: The organization should provide education and training to employees on how to use data. This can be done by organizing workshops, training sessions, and webinars. The organization should also provide access to resources such as data libraries, online courses, and books.
4. Use Data Visualization: The organization should use data visualization tools to make data accessible and understandable to everyone. This can be...