Look into managing Webcrawler memory usage #3

@schippas

Description

While crawling a large number of links, the Webcrawler can exhaust the machine's memory, or slow down as its memory footprint grows. One fix would be to monitor memory usage on the machine and optionally let users set a memory limit. When the limit is reached, the Webcrawler should write its in-memory data to the database, free that memory, and resume crawling from where it left off. Other ways of preventing this may also be worth exploring.
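A minimal sketch of the flush-on-limit idea, using the stdlib `tracemalloc` module to track Python allocations. The `Crawler` class, the `flush` callback, and the 1 MiB cap are hypothetical names and values for illustration; they are not part of the existing codebase:

```python
import tracemalloc

# Hypothetical cap for illustration; a real crawler would make this
# user-configurable, as suggested above.
MEMORY_LIMIT_BYTES = 1 * 1024 * 1024  # 1 MiB

class Crawler:
    """Buffers crawled pages in memory; flushes them once over the limit."""

    def __init__(self, limit_bytes, flush):
        self.limit_bytes = limit_bytes
        self.flush = flush  # hypothetical callback that persists pages to the DB
        self.buffer = {}    # url -> page content currently held in memory

    def record(self, url, content):
        self.buffer[url] = content
        current, _peak = tracemalloc.get_traced_memory()
        if current > self.limit_bytes:
            # Limit hit: persist the buffered data, free it, keep crawling.
            self.flush(self.buffer)
            self.buffer = {}

# Demo: "persist" by counting pages per flush instead of a real database write.
flushed_counts = []
tracemalloc.start()
crawler = Crawler(MEMORY_LIMIT_BYTES, lambda pages: flushed_counts.append(len(pages)))
for i in range(50):
    crawler.record(f"https://example.com/page/{i}", "x" * 100_000)  # ~100 kB each
tracemalloc.stop()
```

Note that `tracemalloc` only sees allocations made through the Python allocator; a process-level check (e.g. RSS via a library such as psutil) could be substituted if total machine memory is what matters.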

Metadata

    Assignees: No one assigned
    Labels: enhancement (New feature or request)
    Projects: No projects
    Milestone: No milestone
    Relationships: None yet
    Development: No branches or pull requests