제니 블로그 (Jenny's Blog)
Since we have our project folders ready, it's time to set the project up. This includes adding urlpatterns in urls.py and altering the settings.py and views.py files. The most basic setup is a URL pattern named index, defined in the root urls.py and pointing to a view in our app. urlpatterns = [ path('', views.index, name='index') ] This first pattern matches the root URL ('') and maps it to the index view functio..
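The snippet above can be sketched as a complete urls.py (a minimal config sketch; "myapp" is a placeholder app name, not from the post):

```python
# urls.py — a minimal sketch of the routing described in the post.
# "myapp" is a placeholder; substitute the actual app name.
from django.urls import path

from myapp import views  # the app's views.py must define index(request)

urlpatterns = [
    # '' matches the root URL and maps it to the index view
    path('', views.index, name='index'),
]
```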
Since we have the backend code more or less ready to go, we chose a framework to start developing a web application. We chose Django, an open-source Python web framework that helps developers build database-driven websites. First we need to install it. This process is very easy: as long as we have Python installed, we type this in the command line. $ pip install django Ta-da ~ done. Now we need to ..
After creating a database schema in the last post, it's time to put the data we need into that database. The data we will be collecting includes data from web scraping and from GET API calls. First, to connect, we need a valid database running. We keep the user, password, host, and database in a separate file because that kind of information should be hidden. # Connect to the database -> th..
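One way to keep those credentials out of the repository is a separate config file loaded at runtime. This is a sketch, not the post's actual code; the filename db_config.json and its keys are illustrative:

```python
import json

# Credentials live in a separate file (listed in .gitignore) so they
# never land in version control. Filename and keys are illustrative.
def load_db_config(path="db_config.json"):
    with open(path) as f:
        return json.load(f)

# usage (requires mysql-connector-python and a running MySQL server):
# import mysql.connector
# conn = mysql.connector.connect(**load_db_config())
```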
Database One of, if not the, most important tools to store data! For this project, we will be using MySQL, which is already set up with root and two users, Jenny and Liam! The general idea is to have a database schema like this in a relational database. For this, we will be using MySQL with the Python library mysql.connector, and the Workbench that is installed on our local computers. T..
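The kind of relational schema the post describes might look like the DDL below. The table and column names are guesses (the post doesn't list them), and it is demonstrated with Python's built-in sqlite3 so it runs anywhere, even though the project itself uses MySQL via mysql.connector:

```python
import sqlite3

# Illustrative schema: anime -> per-episode forum topics -> comments.
# Names are assumptions; the project's real MySQL schema may differ.
DDL = """
CREATE TABLE anime (
    anime_id INTEGER PRIMARY KEY,
    title    TEXT NOT NULL
);
CREATE TABLE forum_topic (
    topic_id INTEGER PRIMARY KEY,
    anime_id INTEGER NOT NULL REFERENCES anime(anime_id),
    episode  INTEGER
);
CREATE TABLE comment (
    comment_id INTEGER PRIMARY KEY,
    topic_id   INTEGER NOT NULL REFERENCES forum_topic(topic_id),
    body       TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['anime', 'comment', 'forum_topic']
```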
02/21/2023 One of the important steps of sentiment analysis is text preprocessing. After collecting data in the form of text, we need to remove any irrelevant information (stop words / punctuation) and convert it into a standardized format, such as lowercase. Stop words Stop words are words that are usually removed from text before processing it for analysis or indexing. They include words lik..
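A minimal preprocessing pass covering the three steps mentioned (lowercase, punctuation removal, stop-word removal) could look like this; the stop-word set here is a tiny illustrative subset, where a real project would use a full list such as NLTK's:

```python
import string

# Tiny illustrative stop-word set; real pipelines use a full list.
STOP_WORDS = {"a", "an", "the", "is", "and", "of", "to", "in"}

def preprocess(text):
    # 1) standardize case, 2) strip punctuation, 3) drop stop words
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return [w for w in text.split() if w not in STOP_WORDS]

print(preprocess("The animation IS great, and the pacing is perfect!"))
# → ['animation', 'great', 'pacing', 'perfect']
```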
2023/02/19 To test the API for the forum topic data, we need to use the following endpoint. f"https://api.myanimelist.net/v2/forum/topic/{topic_id}" To test using only one of the IDs that we got from the parser, we used the same libraries: import requests import json import pandas as pd A simple test produced the following result: response = requests.get(f"https://api.myanimelist.net/v2/foru..
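A self-contained sketch of that call might look like the following. Note the MAL v2 API requires client authentication (an X-MAL-CLIENT-ID header); CLIENT_ID and the example topic id are placeholders, not values from the post:

```python
import requests

CLIENT_ID = "your-client-id"  # placeholder; register with MAL to get one

def topic_url(topic_id):
    # the endpoint quoted in the post
    return f"https://api.myanimelist.net/v2/forum/topic/{topic_id}"

def fetch_topic(topic_id):
    resp = requests.get(
        topic_url(topic_id),
        headers={"X-MAL-CLIENT-ID": CLIENT_ID},
    )
    resp.raise_for_status()
    return resp.json()

# usage (needs a valid client ID and network access):
# data = fetch_topic(481)
```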
When an episode airs, MAL posts a discussion thread for every episode. For example, the anime Bleach has 366 episode discussion posts. Each of them has a unique forum ID, which we can get through web crawling. We plan on getting the ID of each forum post and then using the API to get all the comments from users, which show their opinions and feelings about that episode. This is an experiment ..
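The plan above (crawled IDs in, comments out) can be sketched as a small pipeline. The JSON field names ("data", "posts", "body") are illustrative guesses at the response shape; check the actual MAL payload before relying on them:

```python
# Assumed response shape: {"data": {"posts": [{"body": "..."}]}}
def extract_comments(topic_json):
    posts = topic_json.get("data", {}).get("posts", [])
    return [p.get("body", "") for p in posts]

def collect_comments(topic_ids, fetch):
    # fetch is any callable returning the parsed JSON for one topic id,
    # e.g. an API client; injecting it keeps this function testable offline
    rows = []
    for tid in topic_ids:
        for body in extract_comments(fetch(tid)):
            rows.append({"topic_id": tid, "comment": body})
    return rows
```

The rows can then be loaded straight into a pandas DataFrame for the sentiment-analysis step.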
02/14/2023 Going forward, we plan on scraping anime reviews to further the sentiment analysis. As an example, we got the reviews for the anime Fullmetal Alchemist and created a web crawler. IDE: Jupyter Notebook (Python) import requests from bs4 import BeautifulSoup base_url = "https://myanimelist.net/anime/5114/Fullmetal_Alchemist__Brotherhood/reviews" review_texts = [] # from pages 1..
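A completed version of that crawler might look like this sketch. The CSS class "review-text" and the "p" page parameter are placeholders; inspect the live reviews page for the real selector and pagination scheme:

```python
import requests
from bs4 import BeautifulSoup

# the URL from the post
base_url = ("https://myanimelist.net/anime/5114/"
            "Fullmetal_Alchemist__Brotherhood/reviews")

def parse_reviews(html):
    # "review-text" is a placeholder class name; check the real markup
    soup = BeautifulSoup(html, "html.parser")
    return [div.get_text(strip=True)
            for div in soup.find_all("div", class_="review-text")]

def crawl_reviews(pages=2):
    texts = []
    for p in range(1, pages + 1):
        resp = requests.get(base_url, params={"p": p})
        resp.raise_for_status()
        texts.extend(parse_reviews(resp.text))
    return texts

# usage (needs network access):
# review_texts = crawl_reviews(pages=2)
```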