
I attached all the files you need. You are going to do the Part 2 project based on the Part 1 project that I did in the past and attached below. You only need to do the first topic, cryptocurrency price prediction!! You don't have to do Topic 2; I did it myself. Use only the academic library, library.csun.edu, or Google Scholar.


My topic is (1) cryptocurrency price prediction using a Bayesian Neural Network as our algorithm.

Your assignment search strategies:
● OneSearch
● Library databases, library.csun.edu
● Google Scholar
● Datasets

Part 2: Finalize a project topic from the results of Part 1, select a suitable tool, find a small dataset, and produce a preliminary result. (15%)

Purposes:
· To identify related tools.
· To learn how to use a new library or an existing tool, or to develop your own tools, and to solve software-installation problems.
· To find a small dataset (at least 30 subjects) and produce preliminary results.
· To compare the pros and cons of each tool on datasets of different sizes.
· To measure the efficiency of each tool and record the results.
· To finalize the project topic if the previous topic does not work or no related tool can be found.
· The finalized combination of topic and tool must differ from those reported in other papers; these differences count as your contributions. In other words, if some paper has used an ID3 tree on weather data, you CANNOT use an ID3 tree on weather data again.

Tasks (around 20 hours of workload, which may be divided between team members):
· Step 1: Run simulations on the combinations of tools/datasets that you submitted for Part 1. (Time required: at least 6–8 hours)
· Step 2: Choose the more feasible of the two topics based on the results of the previous step. (Time required: depends on the level of success.) In other words, you need to be able to run the algorithms and tools on the datasets. For example, if you choose a deep learning algorithm and cannot run it on your computer, or it crashes after running for two days, you might need to choose another topic. If both topics fail, repeat the earlier search process to find another suitable topic. Here are other things you need to consider when finalizing your topic:
· Can you find related public datasets to use?
For example, can you find public weather data for Los Angeles? Can you find public crime-rate data? If you cannot find related public data, you might need to choose another application field or even another project topic.
· If you cannot find a public dataset, can you create a questionnaire to collect data from other people (must have more than 30 instances, as many as possible)?
· If you cannot find public tools, can you email the authors for possible tools?
· Step 3: Check whether your dataset has more than 30 instances. (Time required: 0.5 hours) A decent dataset should have 100–200 samples. If you cannot find enough samples, repeat Steps 1 and 2 until you have enough instances.
· Step 4: Read the 15 papers you chose for Part 1 in more detail and study the pros, cons, and limitations of their research. Brainstorm with your partner about new perspectives you could contribute to the project and write them down. (Time required: 2–3 hours)
· A new perspective is not limited to a new invention. Possible contributions include:
· Applying common algorithms to datasets in another field.
· Providing additional data preprocessing whose results show better test accuracy.
· Conducting experiments in a new field; for example, very few people have applied algorithms to fall types among seniors.
· Optimizing learning models.
· Finding the most influential features through a better feature selection process.
· Deciding on and proposing a better feature selection process, etc.
· Step 5: Once the topic is finalized, run the work and generate preliminary results to verify your hypothesis. (Time required: 1–2 hours) If a team has fewer than 100 samples, the team can test the scope of the project by replicating the 30 subjects to create 300, 3,000, or more simulated data points and running the tool. Estimate how much time it takes to run these data.
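Steps 3 and 5 above can be sketched in a few lines of Python. This is a hypothetical illustration: the `close` column and the fabricated 30-row price series are stand-ins for your real cryptocurrency dataset, and the `np.polyfit` call is a placeholder workload where your actual model would go.

```python
import time
import numpy as np
import pandas as pd

# Stand-in for your real data, e.g. df = pd.read_csv("crypto_prices.csv").
# Here we fabricate 30 rows of daily closing prices so the demo is self-contained.
rng = np.random.default_rng(0)
df = pd.DataFrame({"close": 20000 + rng.normal(0, 500, 30).cumsum()})

# Step 3: verify the dataset has at least 30 instances.
assert len(df) >= 30, "Need at least 30 instances"

# Step 5: replicate the 30 real rows to simulate 300 and 3,000 rows,
# then time a stand-in workload on each simulated dataset.
for factor in (10, 100):
    big = pd.concat([df] * factor, ignore_index=True)
    t0 = time.perf_counter()
    np.polyfit(np.arange(len(big)), big["close"], 1)  # replace with your model
    elapsed = time.perf_counter() - t0
    print(f"{len(big)} rows: {elapsed:.4f} s")
```

Recording these timings for each dataset size gives you the efficiency comparison the assignment asks for.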
· Step 6: Report your work in an MS Word draft (at least 3 pages) in the CSCSU format posted under "Resources for doing project". (Time required: 3–4 hours) You can check "Good student examples - A grades" under the "Resources of doing project" module as references; average and bad examples are also there.
· Step 7: Make sure all figures and tables are created in black and white. (Time required: 1–2 hours)
· Step 8: Submit the draft via Canvas for plagiarism checking. The similarity rate reported by Turnitin must not exceed 20%.

Criteria for success: your draft must contain 6 sections.
(1) Introduction
· Describe the problem your team plans to solve and give a brief background of this problem.
· List the themes (i.e., commonalities) of related papers and describe what they have done to solve these problems.
· Briefly describe the predictive models of these papers.
· List debatable points or limitations.
· Justify your plan and your hypothesis. Why is your work important? What has not been done by previous work? (An important selling point!)
(2) Dataset
· Describe the source of your dataset. Did you find it in a public database or repository, or create your own questionnaire to collect data? Where did you find this dataset?
· Describe the characteristics of your dataset. For example, suppose we study a dataset of breast cancer patients:
· How big is your dataset?
· What are the descriptive features of the dataset? Age, family history, weight, country, etc.
· How many descriptive features are there? What types of data are they (e.g., binary, textual, etc.)?
(3) Methodology
· Describe the models used in the project. Ex: describe what an SVM is.
· Describe all data preprocessing steps, if any. Ex: some features are removed due to a high missing rate; missing values are imputed with the mean, etc.
· Tools/software/platforms/computing machines: what tools did you use to run the simulation?
Ex: TensorFlow on a laptop. What computing power did you use: a laptop, a cloud server, a cluster server? Ex: Python, the Pandas library, and scikit-learn, etc.; describe briefly what they are.
(4) Preliminary results (time, test accuracies, etc.)
· Describe the results you have obtained.
· Show your results in tables, charts, or graphs in black and white to increase readability. (Do not use other colors.)
(5) Discussion: new contributions (key component)
· Describe your contributions and the new perspectives that differ from previous works. Ex: you use the latest tool, algorithm, or improved formulas; add new features; improve feature ranking; improve time or performance, etc. The novelty of your perspective will be evaluated.
· Justify how your findings support or oppose your original hypotheses.
(6) Reference list (>= 15 related papers, on an additional page)
· You may find other related papers. List all papers you cited for figures/pictures or content in the reference list. There should be a one-to-one mapping between citations and referenced papers.

Format and requirements:
· At least 3 pages of content in CSCSU format in total for sections (1)–(5), excluding the reference list.
· The reference list must use an acceptable format: CSCSU (you can find the format posted on Canvas).
· Every graph, chart, and picture created and embedded in your draft must be in black and white. No colorful figures are allowed. They should be saved in individual files and will be submitted for Part 3 later.
· You have finalized the project topic and found a related tool. In other words, if you change your project topic after this assignment (in Part 3), it will result in penalties.
· Your combination of topic and dataset should differ from those reported in other papers, so that these differences count as your contributions.
For example, if some paper used an ID3 decision tree on weather data, you could apply the same algorithm to crime-rate data, or use ID3 to predict baseball players' injuries. You cannot simply repeat the exact combination of algorithm and dataset from the same paper; you need to provide new contributions.
· Your contributions must be justified and reasonable. Ex: if every previous work uses a random forest to predict lung cancer and your team decides to choose another algorithm (e.g., SVM) that has not been tested and studied before, you need to find academic papers that support this decision and justify the choice.
· You can find related public data, and that public data is big enough. You successfully run the tool on 30 real data points and simulate cases of 300 or more. You can successfully plot a prediction at a bigger scale.
· Your datasets run on the learning algorithms and models and generate the results reported in your draft. For Part 3, you will need to submit all source code, datasets, analyses, and related raw data. A TA will verify your results based on your submission. If your source code cannot run on the datasets and generate matching results, the draft will be considered cheating, so make sure your source code, tools, and datasets are all valid for the TA to verify.
· You upload all required files and write the final report in CSCSU format with at least 3 pages excluding the reference list; including the reference list, it should be 4 pages. A draft that fails to reach 4 pages can fail this part of the assignment.
· Your writing is free of grammatical and spelling errors.

Below is the description of the Part 1 project, which I did in the past. You don't have to do anything for this question; I already did it myself and attached my work (_part(1) answer.pdf). Your work should be based on mine.
You are going to do Topic 1, cryptocurrency price prediction; forget about Topic 2, I'll do it myself.

Machine Learning Project Part 1: Study previous work from academic journals.
Purposes:
· To develop university-level academic project skills.
· To assess the legitimacy of project sources.
· To identify the scope of the project, e.g., to choose an appropriate topic based on the time and resource frames.
· To know how to search academic papers via databases at the CSUN library or Google Scholar.
· To apply knowledge to real-world applications in a new but related algorithm/field.
· Compare
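As a warm-up for the chosen topic, the Bayesian machinery behind the project can be sanity-checked with a closed-form Bayesian linear regression on lag-1 returns before building a full Bayesian Neural Network. This is a hypothetical sketch on synthetic data, not the project's final model: a real BNN would replace the linear features with network layers and use, e.g., variational inference, but the posterior-plus-predictive-variance pattern shown here is the same idea.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for crypto returns: predict tomorrow's return from today's.
returns = rng.normal(0, 0.02, 200)
X = np.column_stack([np.ones(199), returns[:-1]])  # bias term + lag-1 return
y = returns[1:]

alpha = 1.0              # prior precision on the weights
beta = 1.0 / 0.02 ** 2   # assumed observation-noise precision

# Closed-form posterior over the weights: w ~ N(m, S)
S = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)  # posterior covariance
m = beta * S @ X.T @ y                                 # posterior mean

# Predictive distribution for a new day whose return was 1%
x_new = np.array([1.0, 0.01])
pred_mean = x_new @ m
pred_var = 1.0 / beta + x_new @ S @ x_new  # noise + parameter uncertainty
print(f"prediction: {pred_mean:.4f} +/- {np.sqrt(pred_var):.4f}")
```

The predictive variance decomposes into irreducible noise (1/beta) plus parameter uncertainty (x^T S x); reporting both is exactly the kind of uncertainty estimate that motivates using a Bayesian model for price prediction.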
Mar 17, 2022