A Python feed reader library.

Overview

reader is a Python feed reader library.

It aims to allow writing feed reader applications without any business code, and without enforcing a dependency on a particular framework.


reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark entries as read or important
  • add tags and metadata to feeds
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality
  • skip all the low-level stuff and focus on what makes your feed reader different

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

What reader doesn't do:

  • provide a UI
  • provide a REST API (yet)
  • depend on a web framework
  • have an opinion about how/where you use it

The following exist, but are optional (and frankly, a bit unpolished):

  • a minimal web interface
    • that works even with text-only browsers
    • with automatic tag fixing for podcasts (MP3 enclosures)
  • a command-line interface

Documentation: reader.readthedocs.io

Usage:

$ pip install reader
>>> from reader import make_reader
>>>
>>> reader = make_reader('db.sqlite')
>>> reader.add_feed('http://www.hellointernet.fm/podcast?format=rss')
>>> reader.update_feeds()
>>>
>>> entries = list(reader.get_entries())
>>> [e.title for e in entries]
['H.I. #108: Project Cyclops', 'H.I. #107: One Year of Weird', ...]
>>>
>>> reader.mark_entry_as_read(entries[0])
>>>
>>> [e.title for e in reader.get_entries(read=False)]
['H.I. #107: One Year of Weird', 'H.I. #106: Water on Mars', ...]
>>> [e.title for e in reader.get_entries(read=True)]
['H.I. #108: Project Cyclops']
>>>
>>> reader.update_search()
>>>
>>> for e in list(reader.search_entries('year'))[:3]:
...     title = e.metadata.get('.title')
...     print(title.value, title.highlights)
...
H.I. #107: One Year of Weird (slice(15, 19, None),)
H.I. #52: 20,000 Years of Torment (slice(17, 22, None),)
H.I. #83: The Best Kind of Prison ()
Issues
  • Increasing "database is locked" errors during update

    Starting with 2020-04-15, there has been an increasing number of "database is locked" errors during update on my reader deployment (update every hour and update --new-only && search update every minute).

    Most errors happen at XX:01:0X, which I think indicates the hourly and minutely updates are clashing. It's likely that search update is hogging the database (since we know it has long-running transactions).

    I didn't see any metric changes on the host around -04-15.

    Logs.
    $ head -n1 /var/log/reader/update.log | cut -dT -f1
    2019-07-26
    $ cat locked.py 
    import sys

    # remember the timestamp of the most recent log line, and print it
    # whenever a "database is locked" error follows it
    last_ts = None

    for line in sys.stdin:
        if line.startswith('2020-'):
            last_ts, *_ = line.partition(' ')
        if line.startswith('reader.exceptions.StorageError: sqlite3 error: database is locked'):
            print(last_ts)
    
    $ cat /var/log/reader/update.log | python3 locked.py | cut -dT -f1 | uniq -c
          1 2020-04-15
          4 2020-04-16
          3 2020-04-17
          6 2020-04-18
          2 2020-04-19
          1 2020-04-20
          2 2020-04-21
          3 2020-04-22
          3 2020-04-23
          2 2020-04-24
          3 2020-04-25
          1 2020-04-26
          4 2020-04-27
          2 2020-04-28
          3 2020-04-30
          6 2020-05-01
          8 2020-05-02
         12 2020-05-03
          6 2020-05-04
          4 2020-05-05
          5 2020-05-06
          1 2020-05-07
          2 2020-05-08
          3 2020-05-09
          4 2020-05-10
          4 2020-05-11
          7 2020-05-12
          8 2020-05-13
          9 2020-05-14
          6 2020-05-15
          5 2020-05-16
          9 2020-05-17
         16 2020-05-18
          9 2020-05-19
         10 2020-05-20
         15 2020-05-21
         23 2020-05-22
         20 2020-05-23
         19 2020-05-24
         22 2020-05-25
         22 2020-05-26
         21 2020-05-27
         15 2020-05-28
         18 2020-05-29
         14 2020-05-30
         11 2020-05-31
         17 2020-06-01
         13 2020-06-02
         18 2020-06-03
         10 2020-06-04
         15 2020-06-05
         10 2020-06-06
         14 2020-06-07
         15 2020-06-08
         18 2020-06-09
         17 2020-06-10
         19 2020-06-11
         21 2020-06-12
         19 2020-06-13
         16 2020-06-14
         13 2020-06-15
         24 2020-06-16
         24 2020-06-17
         24 2020-06-18
         24 2020-06-19
         24 2020-06-20
         24 2020-06-21
         24 2020-06-22
         24 2020-06-23
         24 2020-06-24
         11 2020-06-25
    $ cat /var/log/reader/update.log | python3 locked.py | cut -dT -f2 | cut -d: -f2- | cut -c1-4 | sort | uniq -c | sort -rn | head
        510 01:0
        119 03:0
         76 01:1
         46 01:2
         30 00:5
         14 00:3
         13 00:0
          7 01:3
          7 00:4
          3 05:0
    

    I should check if there was a trigger that started this, or if the number of entries/feeds simply hit some threshold.

    Also, it would be nice to show the pid in the logs so I can see which of the commands is failing, and maybe to intercept the exception and show a nicer error message.

    Things that are likely to improve this:

    • [helps] Enabling WAL (#169); see the sketch after this list.
    • Using --workers 20 to give the hourly update a chance to finish before the second minute of the hour. Obviously, this isn't actually addressing the problem.
    • Increasing the timeout passed to sqlite3.connect (at the moment we're using the 5s default).
    • [doesn't help] Using a SQLite build with HAVE_USLEEP (but this is not necessarily a reader issue; we could document it, though).
    • Adding retrying in reader. ಠ_ಠ, SQLite already has it built in; it's only sucky because of the HAVE_USLEEP thing.
    • Wrapping the whole update with a lock. ಠ_ಠ, idem.
    • Making the search update chunks more spaced out, to allow other stuff to happen.
    • Making search update not hog the database by not stripping HTML inside the transaction.
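
    A minimal sketch of the WAL and timeout mitigations above, applied directly to the reader database with the standard sqlite3 module (the 30-second timeout is an arbitrary example value):

    import sqlite3

    # open with a busy timeout longer than the 5s default, so writers
    # wait instead of failing immediately with "database is locked"
    conn = sqlite3.connect('db.sqlite', timeout=30)
    # write-ahead logging lets readers and a single writer proceed
    # concurrently, which should reduce lock contention during updates
    conn.execute('PRAGMA journal_mode = WAL')
    conn.close()
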
    bug core 
    opened by lemon24 15
  • Entry tags and metadata

    Currently two bits of user data can be added to a feed entry (mark_as_read, mark_as_important).

    Possible use cases:

    1. Add user notes about the entry
    2. Add tags for entry, add entry to 'saved items'
    3. For podcasts, download info (i.e., the path to the file if successfully downloaded, or how many download attempts were made)

    Optionally, include user_data in search (as an argument to make_reader).
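
    A hypothetical sketch of what such an API could look like (set_entry_metadata / get_entry_metadata are made-up names, mirroring the existing feed metadata methods):

    # hypothetical; these entry-level methods do not exist (yet)
    reader.set_entry_metadata(entry, 'note', 'listen to this again')
    reader.set_entry_metadata(entry, 'download', {'path': '/podcasts/e01.mp3', 'tries': 1})
    note = reader.get_entry_metadata(entry, 'note')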

    help wanted wontfix API core 
    opened by balki 12
  • SQLite objects created in a thread can only be used in that same thread.

    Hi @lemon24

    Thanks for making this library.

    I am attempting to utilise it for a Telegram bot I am working on. However, I run into the following error:

    SQLite objects created in a thread can only be used in that same thread.
    

    Here is my code 😃

    From some quick Google searching, it looks like I might need to do something similar to what is recommended in this Stack Overflow post.

    However, I cannot see a way I can provide this to the library currently.

    Hope you can help. Thanks and a merry Christmas to you! 🎄
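
    One possible workaround, assuming each thread can simply get its own Reader (and therefore its own SQLite connection):

    import threading

    from reader import make_reader

    _local = threading.local()

    def get_reader():
        # create one Reader per thread; the underlying SQLite connection
        # can only be used from the thread that created it
        if not hasattr(_local, 'reader'):
            _local.reader = make_reader('db.sqlite')
        return _local.reader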

    opened by dbrennand 12
  • Feed decommissioned

    Like #149, but there's no replacement.

    Now, to make the feed stop updating, I can delete it, but I lose the entries.

    Possible ways of keeping the entries:

    • Have a way to mark a feed as "broken / don't update anymore" (obviously, this could be temporary); see the sketch after this list.
    • If we can mark an entire feed as important (https://github.com/lemon24/reader/issues/96#issuecomment-628520935), and have an Archived feed where important entries of deleted feeds go (https://github.com/lemon24/reader/issues/96#issuecomment-460077441), marking the feed as important and then deleting it would preserve the entries.
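
    A sketch of the first option, assuming a disable_feed_updates() method that flips an updates_enabled flag (the URL is just an example):

    # stop the dead feed from being updated, but keep its entries
    reader.disable_feed_updates('http://example.com/dead-feed.xml')
    # update_feeds() would skip it; it can be re-enabled later:
    # reader.enable_feed_updates('http://example.com/dead-feed.xml')
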
    API core 
    opened by lemon24 11
  • [Question] Public API for recently fetched entries, or entries added/modified since date?

    For an app I'm working on, I wish to update some feeds and do something with the entries that appeared. But I'm unsure how to implement this last part and get only the new entries.

    Workflow I sketched so far is:

    1. Disable updates for all feeds in db (because there might be some I don't wish to update)
    2. Start adding feeds to db
       2.1. If feed already exists, enable it for updates
       2.2. If feed doesn't exist, it will be added and enabled for updates automatically
    3. Update feeds through update_feeds or update_feeds_iter
    4. ???

    update_feeds doesn't return anything. update_feeds_iter gives me back a list of UpdateResult objects, where each item has a url and either counts or an exception.

    So, I think I can sum all the counts and ask for that many entries. Something like:

    count = sum(
        result.value.new + result.value.updated
        for result in results
        if not isinstance(result.value, ReaderError)
    )
    new_entries = reader.get_entries(limit=count)
    

    But is it guaranteed that get_entries(sort='recent') will include a recently updated entry? Even if that entry originally appeared a long time ago? I might be misunderstanding what it means for an entry to be marked as "updated", so any pointer on that would be helpful, too.

    Perhaps I could change my workflow a little – first get a monotonic timestamp, then run all the steps, and finally ask for all entries that were added or modified after the timestamp. But it seems that there is no API for searching by date? search_entries is designed for full-text search and works only on a few columns.
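
    A client-side version of the timestamp idea, assuming Entry exposes an added datetime for when it was first stored (check the data model before relying on this):

    from datetime import datetime, timezone

    cutoff = datetime.now(timezone.utc)
    reader.update_feeds()
    # keep only entries first stored after the cutoff;
    # assumes Entry has an `added` attribute with an aware UTC datetime
    new_entries = [
        e for e in reader.get_entries(sort='recent')
        if e.added is not None and e.added >= cutoff
    ]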

    So, my question is:

    1. What is the preferred way of obtaining all entries added in an update call? Counting and using get_entries? Calling get_entries for everything and discarding all results that were added / modified before the timestamp? Something else?
    2. What does it mean that entry was "updated"?
    opened by mirekdlugosz 10
  • Twitter support

    Some notes:

    • Main use case: get updates on someone's tweets, e.g. https://twitter.com/qntm; maybe replies too.
      • Account / API key not required (it kinda defeats the purpose).
    • From 20 minutes of research, snscrape seems to be working (other popular ones seem broken).
      • The lib part is not stable, but usable.
        • We can use our own Requests session (by setting a private attribute).
    • This won't really fit with the retriever/parser model we have now.
      • #222 has the same issue, converge.
      • We can use the date of the last tweet as Last-Modified.
        • We need a limit the first time (scraping is paginated; if we go all the way to the beginning of an account, it'll take ages).
    • Should model the URLs on Twitter's (https://twitter.com/$user, https://twitter.com/$user/with_replies, etc.).
    • Presentation matters:
      • Threads should be shown in a sane way.
      • Media should be inlined (e.g. a link to an image should be shown as an <img ...>).
      • Titles should work with the dedupe plugin (likely, no title).
    plugin 
    opened by lemon24 9
  • Handle non-http feeds gracefully in the web app

    ...or don't handle them at all.

    Adding feed /dev/zero kills the web app. People using Reader directly are free to shoot themselves in the foot however they want; the app should not allow them to, especially if it's not their foot they're shooting.

    Update: Oh look, we already have a TODO for this: https://github.com/lemon24/reader/blob/165a0af5f3510dd64fc4c75de17e9c5f45f25c06/src/reader/core/parser.py#L155
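
    A hypothetical guard the web app could apply before calling add_feed (is_supported_url is a made-up helper, not existing reader API):

    from urllib.parse import urlparse

    def is_supported_url(url: str) -> bool:
        # only allow http(s) feeds; reject local paths like /dev/zero
        return urlparse(url).scheme in ('http', 'https')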

    API web app core 
    opened by lemon24 9
  • Some feeds have duplicate entries

    Some feeds have duplicate entries, or their entries' ids change (resulting in an entry being stored twice).

    E.g., a feed that had the entry id format updated:

    $ sqlite3 db.sqlite 'select feed, id, updated, title from entries where title like "RE: xkcd%"' -line
       feed = http://sealedabstract.com/feed/
         id = http://sealedabstract.com/?p=2494
    updated = 2014-09-09 08:30:25
      title = RE: xkcd #1357 free speech
    
       feed = http://sealedabstract.com/feed/
         id = /?p=2494
    updated = 2014-09-09 07:30:25
      title = RE: xkcd #1357 free speech
    

    If possible, only one should be shown (similar to #78).
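
    A rough sketch of one possible dedupe heuristic, similar in spirit to what a dedupe plugin might do (assumes Entry has feed_url, title, and updated attributes, with updated always set):

    from itertools import groupby

    def newest_per_title(entries):
        # of entries sharing a title within a feed, keep only the most
        # recently updated one
        def key(e):
            return (e.feed_url, e.title or '')
        for _, group in groupby(sorted(entries, key=key), key=key):
            yield max(group, key=lambda e: e.updated)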

    API web app 
    opened by lemon24 9
  • CLI options must be passed all the time

    opened by lemon24 8
  • I don't know what's happening during update_feeds()

    I don't know what's happening during update_feeds() until it finishes. What's more, there's no straightforward way to know programmatically which feeds failed (logging or guessing from feed.last_exception don't count).

    From https://clig.dev/#robustness-guidelines:

    Responsive is more important than fast. Print something to the user in <100ms. If you’re making a network request, print something before you do it so it doesn’t hang and look broken.

    Show progress if something takes a long time. If your program displays no output for a while, it will look broken. A good spinner or progress indicator can make a program appear to be faster than it is.

    Doing either of these is hard at the moment.
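
    A sketch of what a progress loop could look like, assuming an update_feeds_iter() that yields one result per feed as it finishes (as mentioned in the question above about new entries):

    for result in reader.update_feeds_iter():
        # each result has the feed URL, and either counts or an exception
        if isinstance(result.value, Exception):
            print(f"error: {result.url}: {result.value}")
        else:
            print(f"updated: {result.url}: {result.value}")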

    API core 
    opened by lemon24 7
  • REST API

    Have you thought about providing a REST API returning feed data as JSON? This would help with implementing other UI interfaces. I will probably experiment with that and, if you are interested, provide a PR.

    help wanted wontfix 
    opened by clemera 7
  • Gemini subscription support

    https://gemini.circumlunar.space/docs/companion/subscription.gmi

    Prerequisites:

    • a way to fetch gemini:// URLs
      • https://pypi.org/project/aiogemini/
      • https://pypi.org/project/gemurl/
      • https://github.com/kr1sp1n/awesome-gemini#programming
        • https://github.com/cbrews/ignition
        • https://framagit.org/bortzmeyer/agunua
        • https://tildegit.org/solderpunk/AV-98 ("canonical" CLI client)
        • https://tildegit.org/solderpunk/CAPCOM ("canonical" CLI feed reader)
    • figure out how to handle TOFU
    • a way to render GMI to HTML (re-check awesome-gemini above); only if we fetch linked entries

    This might be a good way to explore how a plugin (PyPI) package would work.

    opened by lemon24 0
  • Sort by recently interacted with

    It would be nice to get entries the user recently interacted with.

    "Interacted" means, at a minimum, marked as (un)read/important. "Set tag" might be nice. "Downloaded enclosure" might be nice too.

    Arguably, the mark_as_read plugin (and plugins in general) should not count as an interaction. If we use read_/important_modified, mark_as_read should probably set them to None – but this would likely break the "don't care" tri-state (https://github.com/lemon24/reader/issues/254#issuecomment-938146589).

    enhancement core 
    opened by lemon24 0
  • 4.0 backwards compatibility breaks

    This is to track all the backwards compatibility breaks we want to do in 4.0.

    Things that require deprecation warnings pre-4.0:

    • ...

    Things that do not require / can't (easily) have deprecation warnings pre-4.0:

    • [ ] make most public dataclass fields KW_ONLY (after we drop Python 3.9); see the sketch below
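
    A generic illustration of the KW_ONLY item above (not actual reader code; KW_ONLY requires Python 3.10):

    from dataclasses import dataclass, KW_ONLY

    @dataclass(frozen=True)
    class Feed:
        url: str
        _: KW_ONLY
        # fields after the KW_ONLY marker are keyword-only, so new ones
        # can be added without breaking positional callers
        title: str | None = None
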
    API core 
    opened by lemon24 0
  • Deleting a feed deletes its important entries

    They should likely be moved to an "Archived" feed (mentioned in https://github.com/lemon24/reader/issues/96#issuecomment-460077441).

    • Old entries in the archived feed should not be deleted (by #96); if they remain important, they won't.
    • The entry source (#276) and original_feed_url should be set to the to-be-deleted feed (if not already set).
    core plugin 
    opened by lemon24 0
  • Typing cleanup

    Clean up typing stuff. Might be able to use pyupgrade for this.

    To do:

    • [x] from __future__ import annotations (and the changes it enables)
    • [ ] don't depend on typing_extensions at runtime (example)
    • [ ] typing.Self (supported by mypy master, but not by 0.991); see the sketch after this list
    • [ ] show types for data objects in docs
    • [ ] ~~move TagFilter, {Feed,Entry}Filter options to _storage (todo)~~ no, used by _search as well, just remove todo
    • [ ] make DEFAULT_RESERVED_NAME_SCHEME public
    • [ ] move reader.core.*Hook to types
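
    A generic illustration of the typing.Self item above (not reader code; Self is in typing on Python 3.11+, and in typing_extensions before that):

    from typing import Self

    class Storage:
        def with_timeout(self, seconds: float) -> Self:
            # returning Self preserves the precise subclass type for
            # callers, instead of hardcoding -> "Storage"
            ...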

    To not do:

    • Consolidate (public) aliases under reader.typing (Flask does it)?
      • No, reader.types is likely good enough (but maybe consolidate them in one place in the file).
    • import typing as t, import typing_extensions as te (Flask does it)?
      • Fewer imports, but makes code look kinda ugly.
      • Also,

        When adding types, the convention is to import types using the form from typing import Union (as opposed to doing just import typing or import typing as t or from typing import *).

    opened by lemon24 0