start [2018/04/26 13:46] (current) by zoza
===== Python and SOM =====
  - python module by Vahid Moosavi of CAAD: **sompy**
  - another SOM Python implementation, **somoclu**: https://en/stable/index.html
  - SOM Java Toolbox created at TU Wien: http://dm/somtoolbox/
  - Twitter sentiment analysis with Python: https://another-twitter-sentiment-analysis-with-python-part-5-50b4e87d9bdd
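Before reaching for these toolboxes, it helps to see how small the core algorithm is. A stdlib-only sketch of SOM training — the grid size, decay schedules and toy data here are illustrative assumptions, not taken from sompy or somoclu:
<code python>import math
import random

def train_som(data, rows=4, cols=4, epochs=20, lr0=0.5, seed=0):
    """Train a tiny self-organizing map; returns one weight vector per grid node."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    sigma0 = max(rows, cols) / 2.0
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood radius
        for x in data:
            # best-matching unit: node whose weights are closest to the sample
            bmu = min(range(len(weights)),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            br, bc = divmod(bmu, cols)
            for i, w in enumerate(weights):
                r, c = divmod(i, cols)
                h = math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2 * sigma ** 2))
                # pull each node toward the sample, weighted by grid distance to the BMU
                weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return weights

# two well-separated clusters should end up on different parts of the grid
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
weights = train_som(data)</code>
The listed libraries add what this sketch omits: batch training, map quality measures and visualisation.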
===== Scraping and mining twitter streams =====
following these tutorials: [[http://posts/2014/07/twitter-analytics/|Introduction to Text Mining using Twitter Streaming API and Python]] & [[http://notes/streaming-data-from-twitter.html|a beginners guide to streamed data from Twitter]]
  - get Twitter API keys from https://
  - **scrape tweet stream** using [[twitter-streaming-py|python streaming script]]
  - **mine the tweets** using [[mine-tweets-py|python mining script]]
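The mining step is essentially reading the stored stream back and counting terms. A stdlib sketch, assuming the streaming script saved one JSON tweet per line — the sample lines and stopword list are invented for illustration:
<code python>import json
from collections import Counter

def top_terms(lines, n=5, stopwords=frozenset({"rt", "the", "a", "to", "with"})):
    """Count the most frequent terms across one-JSON-tweet-per-line records."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        tweet = json.loads(line)
        for term in tweet.get("text", "").lower().split():
            if term not in stopwords:
                counts[term] += 1
    return counts.most_common(n)

# toy stand-in for the file written by the streaming script
sample = [
    '{"text": "python makes twitter mining easy"}',
    '{"text": "RT mining tweets with python"}',
]
top = top_terms(sample, n=2)</code>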
A resourceful guide for Twitter text mining in Python: https://2015/03/23/mining-twitter-data-with-python-part-4-rugby-and-term-co-occurrences/
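The term co-occurrence analysis covered in that guide reduces to counting unordered term pairs per tweet. A stdlib sketch with invented sample tweets:
<code python>from collections import Counter
from itertools import combinations

def cooccurrences(texts):
    """Count how often each unordered pair of terms appears in the same tweet."""
    com = Counter()
    for text in texts:
        terms = sorted(set(text.lower().split()))  # unique terms, alphabetical pair order
        for a, b in combinations(terms, 2):
            com[(a, b)] += 1
    return com

tweets = ["rugby world cup", "Rugby cup final", "python data mining"]
com = cooccurrences(tweets)</code>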
===== Scraping and mining Dezeen articles =====
  * with **scrapy**
The setup: python3 in a conda environment
<code>$ conda create -n bots python=3.4 # create a virtual environment named "bots"
$ source activate bots # activate the environment; check if active: conda info --envs
$ conda install -n bots -c conda-forge scrapy # install scrapy for the named environment</code>
Run scrapy directly from the shell:
<code>$ scrapy startproject dezeen # start a project</code>
Detailed instructions here: https://en/latest/intro/tutorial.html#creating-a-project
Create a //spider// in the folder dezeen/dezeen/spiders/ containing a class that declares its name. This name is used to call the spider from the console:
<code>$ scrapy crawl spider_name</code>
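As a sketch, a spider class for this project could look as follows — the class name, start URL and CSS selectors are assumptions about Dezeen's markup, not verified against the real pages:
<code python>import scrapy

class DezeenSpider(scrapy.Spider):
    name = "dezeen"  # the name used on the console: scrapy crawl dezeen
    start_urls = ["https://www.dezeen.com/"]  # hypothetical starting point

    def parse(self, response):
        # hypothetical selectors -- adapt them to the page structure you actually see
        for article in response.css("article"):
            yield {
                "title": article.css("h3 a::text").get(),
                "link": article.css("h3 a::attr(href)").get(),
            }</code>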
It is also important to declare the fields to be scraped from the pages. This is done in the dezeen/ file (the class is already declared when you start the project), e.g.:
<code python>from scrapy import Item, Field

class DezeenItem(Item):
    title = Field()
    link = Field()
    description = Field()</code>
These fields will later be used as keys of the item dictionary (e.g. item['link']).
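A self-contained sketch of that dictionary-style access (repeating the class definition so it runs on its own; the values are made up):
<code python>from scrapy import Item, Field

class DezeenItem(Item):
    title = Field()
    link = Field()
    description = Field()

item = DezeenItem(title="Example house")         # fields can be set at creation ...
item["link"] = "https://www.dezeen.com/example"  # ... or via dictionary access</code>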
====== DOCTORAL RESEARCH ======
====== other ======
[[ways-to-run-python|ways to run Python]]
[[server maintenance]]
start.1484229460.txt.gz · Last modified: 2017/01/12 13:57 by zoza