Change legacy backend related + parser docs (#3419)
* Fix legacy backend/parser related docs
TueeNguyen authored Apr 22, 2022
1 parent 9f1a049 commit 0af6725
Showing 6 changed files with 27 additions and 27 deletions.
18 changes: 10 additions & 8 deletions src/api/parser/README.md
@@ -1,6 +1,12 @@
# Parser Service: (To be updated when Parser service is dockerized and live)
# Parser Service:

The Parser service parses posts from user's feeds to populate Redis
The current system uses the parser service to run the feed parser and feed queue; see [`./data/feed.js`](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/feed.js). The blog feeds are stored in a Supabase database. They are fetched and loaded into a [queue](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/lib/queue.js) to create [Feed](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/feed.js) and [Post](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/post.js) objects, which are then stored in the `Redis` (cache) and `Elasticsearch` (indexing) databases. Afterwards, various microservices use them to request data.

Telescope's data model is built on Feeds and Posts. A feed represents an RSS/Atom feed, and includes metadata about a particular blog (e.g., URL, author, etc) as well as URLs to individual Posts. A Post includes metadata about a particular blog post (e.g., URL, date created, date updated, etc).

To run the service, use `pnpm services:start parser` or `pnpm services:start`, or run `pnpm dev` from `src/api/parser`. When it runs, the logs show information about feeds being parsed in real time, and this continues indefinitely.

The parser gets all the feed URLs and authors from the Supabase database, parses them, creates `Feed` objects, and puts them into a queue managed by [Bull](https://github.com/OptimalBits/bull) and backed by `Redis`. These are then processed in [`./src/feed/processor.js`](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/feed/processor.js) to download the individual Posts, which are also cached in Redis.
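
As a rough illustration of this flow, the sketch below shows the general Bull pattern of a producer adding feed jobs to a Redis-backed queue and a worker processing them. The queue name, Redis URL, and helper names are assumptions for illustration only, not Telescope's actual code:

```javascript
const Queue = require('bull');

// Hypothetical queue name and Redis URL, for illustration only.
const feedQueue = new Queue('feed-queue', 'redis://127.0.0.1:6379');

// Worker: each job carries one feed's url/author; download and parse the feed,
// then create Post objects and cache them in Redis.
feedQueue.process(async (job) => {
  const { url, author } = job.data;
  // ...fetch and parse the feed at `url`, store the resulting posts...
  return { url, author };
});

// Producer: enqueue one job per feed row fetched from the database.
async function enqueueFeeds(feeds) {
  await Promise.all(feeds.map(({ url, author }) => feedQueue.add({ url, author })));
}

module.exports = { feedQueue, enqueueFeeds };
```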

## Install

@@ -10,7 +16,7 @@ pnpm install

## Usage

### Normal mode
### Docker mode

```
pnpm start
@@ -22,8 +28,4 @@ pnpm start
pnpm dev
```

By default the server is running on <http://localhost:10000/>.

### Examples

## Docker
By default, the server runs on http://localhost:10000/.
13 changes: 13 additions & 0 deletions src/web/docusaurus/docs/api-services/parser.md
@@ -0,0 +1,13 @@
---
sidebar_position: 8
---

# Parser Service:

The current system uses the parser service to run the feed parser and feed queue; see [`./data/feed.js`](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/feed.js). The blog feeds are stored in a Supabase database. They are fetched and loaded into a [queue](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/lib/queue.js) to create [Feed](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/feed.js) and [Post](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/data/post.js) objects, which are then stored in the `Redis` (cache) and `Elasticsearch` (indexing) databases. Afterwards, various microservices use them to request data.

Telescope's data model is built on Feeds and Posts. A feed represents an RSS/Atom feed, and includes metadata about a particular blog (e.g., URL, author, etc) as well as URLs to individual Posts. A Post includes metadata about a particular blog post (e.g., URL, date created, date updated, etc).

To run the service, use `pnpm services:start parser` or `pnpm services:start`, or run `pnpm dev` from `src/api/parser`. When it runs, the logs show information about feeds being parsed in real time, and this continues indefinitely.

The parser gets all the feed URLs and authors from the Supabase database, parses them, creates `Feed` objects, and puts them into a queue managed by [Bull](https://github.com/OptimalBits/bull) and backed by `Redis`. These are then processed in [`src/api/parser/feed/processor.js`](https://github.com/Seneca-CDOT/telescope/blob/master/src/api/parser/src/feed/processor.js) to download the individual Posts, which are also cached in Redis.
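
To make the processing step more concrete, here is a minimal sketch of what a feed job handler could look like, assuming the `rss-parser` package and an `ioredis` client; the key scheme and function names are illustrative, not the actual `processor.js` code:

```javascript
const Parser = require('rss-parser');
const Redis = require('ioredis');

const parser = new Parser();
const redis = new Redis(); // assumes Redis on localhost:6379

// Hypothetical job handler: download one feed and cache its posts in Redis.
async function processFeedJob(job) {
  const { url } = job.data;
  const feed = await parser.parseURL(url);

  await Promise.all(
    feed.items.map((item) =>
      // Illustrative key scheme; the real service defines its own data model.
      redis.set(`post:${item.guid || item.link}`, JSON.stringify(item))
    )
  );

  return feed.items.length;
}

module.exports = processFeedJob;
```
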
16 changes: 2 additions & 14 deletions src/web/docusaurus/docs/architecture.md
@@ -27,19 +27,7 @@ The following gives an overview of the current (i.e. 2.4.0) design of Telescope.

### Legacy Monolithic Back-end (1.0)

Telescope's back-end began as a single, monolithic node.js app. During the 2.0 release, much of the back-end was split into separate microservices (see below). However, parts of the legacy back-end code are still in use, see `src/backend/*`.

The current system uses the legacy backend in order to run the feed parser and feed queue, see `src/backend/feed/*`. The processed feeds and posts are then stored in Redis (cache) and Elasticsearch (indexing) databases, and various microservices use these in order to get their data.

Telescope's data model is built on Feeds and Posts. A feed represents an RSS/Atom feed, and includes metadata about a particular blog (e.g., URL, author, etc) as well as URLs to individual Posts. A Post includes metadata about a particular blog post (e.g., URL, date created, date updated, etc).

The legacy back-end is started using `pnpm start` in the root of the Telescope monorepo, and it (currently) must be run alongside the microservices. When it runs, the logs show information about feeds being parsed in real-time, which continues forever.

The parser downloads the [CDOT Feed List](https://wiki.cdot.senecacollege.ca/wiki/Planet_CDOT_Feed_List#Feeds), parses it, creates `Feed` objects and puts them into a queue managed by [Bull](https://github.com/OptimalBits/bull) and backed by Redis. These are then processed in `src/backend/feed/processor.js` in order to download the individual Posts, which are also cached in Redis.

There is code duplication between the current back-end and the Parser microservice (see `src/api/parser`), and anyone changing the back-end will also need to update the Parser service at the same time (for now). One of the 3.0 goals is to [remove the back-end and move all of this logic to the Parser service](https://github.com/Seneca-CDOT/telescope/issues?q=is%3Aissue+is%3Aopen+parser+service).

In production, the legacy back-end is deployed as a container named `telescope` (see `docker/production.yml`), and its Dockerfile lives in the root at `./Dockerfile`.
Telescope's back-end began as a single, monolithic node.js app. During the 2.0 release, much of the back-end was split into separate microservices (see below). However, parts of the legacy back-end code were still in use (see [`src/backend`](https://github.com/Seneca-CDOT/telescope/tree/d780d630abdd903b55a2a645b0f98ee96554e434/src/backend)), but they were eventually replaced by the `parser` service during the 3.0 release.

### Back-end Microservices (2.0)

@@ -52,7 +40,7 @@ The legacy back-end has been split into a series of microservices. Each microser
- Posts Service (`src/api/posts`) - API for accessing Post and Feed data in Redis (probably not well named at this point)
- Search Service (`src/api/search`) - API for doing searches against Elasticsearch
- Status Service (`src/api/status`) - API for accessing Telescope status information, as well as providing the Dashboards
- Parser Service (`src/api/parser`) - feed and post parsing. Currently disabled, see <https://github.com/Seneca-CDOT/telescope/issues/2111>
- Parser Service (`src/api/parser`) - feed and post parsing, which was disabled in 2.0 but is now enabled in the 3.0 release.

All microservices are built on a common foundation, the [Satellite module](https://github.com/Seneca-CDOT/telescope/tree/master/src/satellite). Satellite provides a common set of features for building Express-based microservices, with proper logging, health checks, headers, authorization middleware, as well as connections to Redis and Elasticsearch. It saves us having to manage the same set of dependencies a dozen times, and repeat the same boilerplate code.
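
To give a sense of the boilerplate Satellite factors out, a bare Express microservice typically repeats setup along these lines (a generic sketch using `express`, `helmet`, and `pino-http`, not Satellite's actual API):

```javascript
const express = require('express');
const helmet = require('helmet');
const pinoHttp = require('pino-http');

const app = express();

// Setup that every service would otherwise repeat: security headers,
// request logging, and a health check endpoint.
app.use(helmet());
app.use(pinoHttp());
app.get('/healthcheck', (req, res) => res.json({ status: 'ok' }));

// Service-specific routes go here.
app.get('/', (req, res) => res.json({ message: 'hello' }));

app.listen(4444); // illustrative port
```

Satellite wraps this kind of setup, along with Redis and Elasticsearch connections and authorization middleware, so each microservice only has to declare its own routes.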

1 change: 0 additions & 1 deletion src/web/docusaurus/docs/contributing/debugging.md
@@ -32,7 +32,6 @@ Now that we know how to launch the server. We will look at different launching o

![VS Launch Options Screenshot](../../static/img/VS-Launch-Options.png)

1. Launch Telescope -> Launches index.js and tries to run the backend.
1. Launch Auto Deployment -> Launches autodeployment found in tools.
1. Launch All Tests -> Runs all the tests in the tests folder.
1. Launch Opened Test File -> Will run a test that is currently opened in a VS code tab.
2 changes: 0 additions & 2 deletions src/web/docusaurus/docs/getting-started/environment-setup.md
@@ -260,8 +260,6 @@ This is the default setting, you do not need to copy or modify any `env` file.

```bash
pnpm services:start

pnpm start
```

Then visit `localhost:8000` in a web browser to see Telescope running locally. `localhost:3000/posts` will show you the list of posts in JSON.
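
For a quick programmatic check that the posts service is responding, a small Node script along these lines works (assumes Node 18+ for the global `fetch`; not part of the documented setup steps):

```javascript
// Quick local smoke test: list the posts the posts service currently knows about.
(async () => {
  const res = await fetch('http://localhost:3000/posts');
  console.log(await res.json());
})();
```
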
4 changes: 2 additions & 2 deletions src/web/docusaurus/docs/tools-and-technologies/pino.md
@@ -4,12 +4,12 @@

# Logging Support Using Pino

This project uses [Pino](http://getpino.io/#/) to provide support for logging in Production as well as development environments. The [logger.js](https://github.com/Seneca-CDOT/telescope/blob/master/src/backend/utils/logger.js) module exports a logger instance that can be used in other modules to implement logging for important events.
This project uses [Pino](http://getpino.io/#/) to provide support for logging in Production as well as development environments. `Satellite` exports a [`logger`](https://github.com/Seneca-CDOT/telescope/blob/master/src/satellite/src/logger.js) instance that can be used in other modules to implement logging for important events.

## How to use the logger

```javascript
const { logger } = require('../src/backend/utils/logger');
const { logger } = require('@senecacdot/satellite');

logger.info('Important information...');
logger.trace('Information About Trace');
