Unsupervised Approach for Automatic Keyword Extraction using Text Features.
YAKE! is a light-weight unsupervised automatic keyword extraction method which relies on statistical text features extracted from single documents to select the most important keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of our proposal, we compare it against ten state-of-the-art unsupervised approaches (TF.IDF, KP-Miner, RAKE, TextRank, SingleRank, ExpandRank, TopicRank, TopicalPageRank, PositionRank and MultipartiteRank) and one supervised method (KEA). Experimental results carried out on top of twenty datasets (see the Benchmark section below) show that our method significantly outperforms the state of the art on collections of different sizes, languages, and domains. In addition to the Python package described here, we also make available a demo and an API.
- Unsupervised approach
- Corpus-Independent
- Domain and Language Independent
- Single-Document
YAKE! generically outperforms statistical methods [TF.IDF (in 100% of the datasets), KP-Miner (in 55%) and RAKE (in 100%)], state-of-the-art graph-based methods [TextRank (in 100% of the datasets), SingleRank (in 90%), TopicRank (in 70%), TopicalPageRank (in 90%), PositionRank (in 90%), MultipartiteRank (in 75%) and ExpandRank (in 100%)], and supervised learning methods [KEA (in 70% of the datasets)] across different datasets, languages and domains. The results listed in the table refer to F1@10 scores. Bold face marks the current best result for each dataset, and the "Method" column cites the work of the previous (or current) best method, depending on where the bold face appears. The interested reader should refer to this table for a detailed comparison between YAKE! and all the state-of-the-art methods.
Extracting keywords from texts has become a challenge for individuals and organizations as information grows in complexity and size. The need to automate this task so that texts can be processed in a timely and adequate manner has led to the emergence of automatic keyword extraction tools. Despite the advances, there is a clear lack of multilingual online tools to automatically extract keywords from single documents. YAKE! is a novel feature-based system for multilingual keyword extraction which supports texts of different sizes, domains, and languages. Unlike other approaches, YAKE! does not rely on dictionaries or thesauri, nor is it trained on any corpus. Instead, it follows an unsupervised approach built upon features extracted from the text itself, making it applicable to documents written in different languages without the need for further knowledge. This can be beneficial for a large number of tasks and a plethora of situations where access to training corpora is either limited or restricted.
Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C., & Jatowt, A. (2018). A Text Feature Based Automatic Keyword Extraction Method for Single Documents. In Proceedings of the 40th European Conference on Information Retrieval (ECIR'18), Grenoble, France, March 26-29. https://link.springer.com/chapter/10.1007/978-3-319-76941-7_63
Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C., & Jatowt, A. (2018). YAKE! Collection-Independent Automatic Keyword Extractor. In Proceedings of the 40th European Conference on Information Retrieval (ECIR'18), Grenoble, France, March 26-29. https://link.springer.com/chapter/10.1007/978-3-319-76941-7_80
There are three installation alternatives.
- If you want to run YAKE! from the command line (say, to integrate it in a script) but do not need an HTTP server on top, you can use our simple YAKE! Docker image. This container lets you run text extraction as a command and then exit.
- If you want to run YAKE! as an HTTP server featuring a RESTful API (say, to integrate it in a web application or host your own YAKE!), you can use our RESTful API server image. This container/server runs in the background.
- If you want to install YAKE! straight "on the metal" or integrate it in your Python app, you can install it and its dependencies with pip.
First, install Docker. Ubuntu users can find a complete installation script below.
Then, run:
docker run liaad/yake:latest -ti "Caffeine is a central nervous system (CNS) stimulant of the methylxanthine class.[10] It is the world's most widely consumed psychoactive drug. Unlike many other psychoactive substances, it is legal and unregulated in nearly all parts of the world. There are several known mechanisms of action to explain the effects of caffeine. The most prominent is that it reversibly blocks the action of adenosine on its receptor and consequently prevents the onset of drowsiness induced by adenosine. Caffeine also stimulates certain portions of the autonomic nervous system."
Example text from Wikipedia
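If you would rather extract keywords from a file than pass the text inline, a variant along these lines should work; it assumes the image accepts the same -i/--input_file flag documented in the command-line section below, and the /data mount point is only an example:

docker run -v "$(pwd)":/data liaad/yake:latest -i /data/input.txt -l en -n 3 -t 10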
This installation provides you with a mirror of the original YAKE! REST API, available here.
docker run -p 5000:5000 -d liaad/yake-server:latest
After it starts up, the container will run in the background, at http://127.0.0.1:5000. To access the YAKE! API documentation, go to http://127.0.0.1:5000/apidocs/.
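To check that the server came up correctly, you can list the running containers and, if needed, inspect the logs (replace <container_id> with the value reported by docker ps):

docker ps
docker logs <container_id>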
You can test the RESTful API using curl:
curl -X POST "http://localhost:5000/yake/" -H "accept: application/json" -H "Content-Type: application/json" \
-d @- <<'EOF'
{
"language": "en",
"max_ngram_size": 2,
"number_of_keywords": 10,
"text": "Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning competitions. Details about the transaction remain somewhat vague , but given that Google is hosting its Cloud Next conference in San Francisco this week, the official announcement could come as early as tomorrow. Reached by phone, Kaggle co-founder CEO Anthony Goldbloom declined to deny that the acquisition is happening. Google itself declined 'to comment on rumors'. Kaggle, which has about half a million data scientists on its platform, was founded by Goldbloom and Ben Hamner in 2010. The service got an early start and even though it has a few competitors like DrivenData, TopCoder and HackerRank, it has managed to stay well ahead of them by focusing on its specific niche. The service is basically the de facto home for running data science and machine learning competitions. With Kaggle, Google is buying one of the largest and most active communities for data scientists ..."
}
EOF
Example text from a news article about the acquisition of Kaggle by Google
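If you prefer to call the API from Python rather than curl, here is a minimal sketch using only the standard library; it assumes the server started above is reachable at http://127.0.0.1:5000 and that it answers with JSON:

import json
import urllib.request

# Same payload as the curl example above, with the text shortened for readability
payload = {
    "language": "en",
    "max_ngram_size": 2,
    "number_of_keywords": 10,
    "text": "Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning competitions."
}

request = urllib.request.Request(
    "http://127.0.0.1:5000/yake/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "accept": "application/json"},
    method="POST",
)

# Send the request and print whatever JSON the server returns
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read().decode("utf-8")))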
Python3
To install YAKE! using pip:
pip install git+https://github.com/LIAAD/yake
To upgrade using pip:
pip install git+https://github.com/LIAAD/yake --upgrade
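After installing (or upgrading), you can quickly confirm that both the package and its command-line entry point are available:

python3 -c "import yake; print(yake.KeywordExtractor)"
yake --help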
How to use it on your favorite command line
Usage: yake [OPTIONS]
Options:
-ti, --text_input TEXT      Input text, SURROUNDED by single quotes (')
-i, --input_file TEXT       Input file
-l, --language TEXT         Language
-n, --ngram-size INTEGER    Max size of the ngram.
-df, --dedup-func [leve|jaro|seqm]
                            Deduplication function.
-dl, --dedup-lim FLOAT      Deduplication threshold.
-ws, --window-size INTEGER  Window size.
-t, --top INTEGER           Number of keyphrases to extract.
-v, --verbose
--help                      Show this message and exit.
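For example, to extract the 10 most relevant keyphrases of up to 3 words from an English text file (input.txt is just a placeholder name):

yake -i input.txt -l en -n 3 -t 10

or, passing the text inline:

yake -ti 'Caffeine is a central nervous system stimulant of the methylxanthine class.' -l en -n 2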
How to use it in Python
import yake
text_content = """
Sources tell us that Google is acquiring Kaggle, a platform that hosts data science and machine learning
competitions. Details about the transaction remain somewhat vague , but given that Google is hosting
its Cloud Next conference in San Francisco this week, the official announcement could come as early
as tomorrow. Reached by phone, Kaggle co-founder CEO Anthony Goldbloom declined to deny that the
acquisition is happening. Google itself declined 'to comment on rumors'.
"""
# assuming default parameters
simple_kwextractor = yake.KeywordExtractor()
keywords = simple_kwextractor.extract_keywords(text_content)

for kw in keywords:
    print(kw)

# specifying parameters
custom_kwextractor = yake.KeywordExtractor(lan="en", n=3, dedupLim=0.8, windowsSize=2, top=20)
keywords = custom_kwextractor.extract_keywords(text_content)

for kw in keywords:
    print(kw)
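extract_keywords returns each candidate together with its score, and in YAKE! lower scores indicate more relevant keywords. Assuming the tuples come back as (keyword, score), which may vary across versions of the package, you can unpack them like this:

for kw, score in keywords:
    # lower YAKE! scores mean more relevant keywords
    print(f"{score:.4f}\t{kw}")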
Credits to https://github.com/silvae86
https://github.com/boudinfl/pke - pke is an open source Python-based keyphrase extraction toolkit. It provides an end-to-end keyphrase extraction pipeline in which each component can be easily modified or extended to develop new models. pke also allows for easy benchmarking of state-of-the-art keyphrase extraction models, and ships with supervised models trained on the SemEval-2010 dataset (http://aclweb.org/anthology/S10-1004).
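As a point of comparison, here is a rough sketch of how YAKE! can be run through pke's standard pipeline; treat it as an illustration only, since class and parameter names may differ between pke versions:

import pke

# build the YAKE! extractor bundled with pke
extractor = pke.unsupervised.YAKE()

# load the document (reusing the text_content string from the Python example above),
# select candidates of up to 3 words, weight them, and fetch the 10 best keyphrases
extractor.load_document(input=text_content, language="en")
extractor.candidate_selection(n=3)
extractor.candidate_weighting(window=2)
keyphrases = extractor.get_n_best(n=10)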
Credits to https://github.com/boudinfl
Here is the "just copy and paste" installation script for Docker on Ubuntu. Enjoy.
# Install dependencies
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
# Install Docker
sudo apt-get install -y docker-ce
# Start Docker Daemon
sudo service docker start
# Add yourself to the Docker user group, otherwise docker will complain that
# it does not know if the Docker Daemon is running
sudo usermod -aG docker ${USER}
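# Note: the group change only takes effect after you log out and back in
# (or run 'newgrp docker' in the current shell)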
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
source ~/.bashrc
docker-compose --version
echo "Done!"