The Singleton Pattern ensures a class has only one instance throughout a program and provides a global access point. It is commonly used for managing shared resources like databases, logging systems or file managers.
This example shows how a class creates only one object and returns that same object every time it is instantiated.
class Single:
    instance = None

    def __new__(cls):
        if cls.instance is None:
            cls.instance = super().__new__(cls)
        return cls.instance

a = Single()
b = Single()
print(a is b)
Output
True
Explanation:
- class Single: Defines the class; instance = None is a class variable that stores the single object.
- def __new__(cls): Controls object creation; if cls.instance is None checks whether the object already exists.
- cls.instance = super().__new__(cls): Creates the object only once, while return cls.instance always returns that same object.
- a = Single(): The first call creates the instance; b = Single(): the second call reuses it.
- print(a is b): Confirms both variables point to the same object (True). Note that __init__, if defined, still runs on every call, as the sketch below shows.
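One caveat with overriding only __new__ is that __init__ still runs on every call, even though the same object is returned. A minimal sketch with a hypothetical Counter class shows the effect:
class Counter:
    instance = None

    def __new__(cls):
        if cls.instance is None:
            cls.instance = super().__new__(cls)
        return cls.instance

    def __init__(self):
        # __init__ is not guarded, so it runs on every Counter() call
        # and resets the attribute each time.
        self.count = 0

c1 = Counter()
c1.count = 5
c2 = Counter()      # the same object is returned, but __init__ runs again
print(c1 is c2)     # True
print(c1.count)     # 0, because the second call reset it
If re-initialisation is a problem, guard the body of __init__ (for example with a flag on the instance) so it only runs once.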
Methods to Implement Singleton Pattern
1. Module-level Singleton
Every Python module is a singleton by default: it is imported only once per process, and all later imports reuse the same module object, so any variables or functions defined in it are shared across importers. In the example below, two files, samplemodule1.py and samplemodule2.py, share a variable defined in singleton.py.
# singleton.py
var = "Shared Variable"
# samplemodule1.py
import singleton
print(singleton.var)
singleton.var += "(modified by samplemodule1)"
# samplemodule2.py
import singleton
print(singleton.var)
Output (when samplemodule1 and samplemodule2 are imported in that order by the same program)
Shared Variable
Shared Variable(modified by samplemodule1)
Here, the value changed by samplemodule1 is also reflected in samplemodule2.
Explanation:
- samplemodule1 modifies singleton.var.
- samplemodule2 reflects the updated value because the module itself is a singleton (a configuration-style sketch of the same idea follows below).
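Because the module object is created only once per process, module-level state is often used for shared configuration. A minimal sketch, assuming a hypothetical config.py module and an app.py that imports it:
# config.py - hypothetical module acting as a singleton
settings = {"debug": False}

def enable_debug():
    settings["debug"] = True

# app.py
import config

config.enable_debug()
print(config.settings["debug"])   # True for every other module that imports config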
2. Classic Singleton
Classic Singleton creates an instance only if it does not exist. Otherwise, it returns the already created instance.
class Singleton:
    def __new__(cls):
        if not hasattr(cls, 'inst'):
            cls.inst = super().__new__(cls)
        return cls.inst

s1 = Singleton()
s2 = Singleton()
print(s1 is s2)

s1.val = "Singleton Variable"
print(s2.val)
Output
True
Singleton Variable
Explanation:
- if not hasattr(cls, 'inst'): Checks if an instance already exists.
- cls.inst = super().__new__(cls): Creates a new instance if none exists.
- return cls.inst: Always returns the same instance.
- s1 = Singleton() and s2 = Singleton(): Both calls return the same instance. (Note that the hasattr check is not thread-safe; a locked variant is sketched below.)
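The hasattr check is not thread-safe on its own: two threads can both see the attribute missing and each create an instance. A common variant, sketched here with a threading.Lock (the class name is illustrative), serialises the check:
import threading

class ThreadSafeSingleton:
    _lock = threading.Lock()

    def __new__(cls):
        # Serialise the existence check so two threads cannot
        # both create an instance at the same time.
        with cls._lock:
            if not hasattr(cls, 'inst'):
                cls.inst = super().__new__(cls)
        return cls.inst

results = []

def create():
    results.append(ThreadSafeSingleton())

threads = [threading.Thread(target=create) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(obj is results[0] for obj in results))   # True: only one instance exists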
Subclass Example: Let's check what happens when we subclass a singleton class.
class SingletonChild(Singleton):
    pass

child = SingletonChild()
print(child is s1)
print(child.val)
Output
True
Singleton Variable
Explanation:
- SingletonChild inherits from Singleton, including __new__ and the inst attribute that was already set on Singleton.
- Because hasattr(cls, 'inst') finds the inherited attribute, no new object is created and child is the very same instance as s1 (True).
- Accessing child.val therefore returns the same value as s1.val.
- To give each subclass its own single instance instead, check the class's own __dict__, as sketched below.
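If each subclass should keep its own single instance, one common variant (a sketch, not part of the example above) checks the class's own __dict__ rather than using hasattr, which also looks at parent classes:
class PerClassSingleton:
    def __new__(cls):
        # Look only in this class's own dict, not in its parents,
        # so every subclass gets exactly one instance of its own.
        if 'inst' not in cls.__dict__:
            cls.inst = super().__new__(cls)
        return cls.inst

class Child(PerClassSingleton):
    pass

p1 = PerClassSingleton()
c1 = Child()
print(p1 is c1)          # False: parent and child keep separate instances
print(c1 is Child())     # True: Child is still a singleton on its own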
3. Borg Singleton
Borg Singleton (also known as the Monostate pattern) allows many instances to exist while all of them share the same state.
class Borg:
    _shared = {}

    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        # Every instance's attribute dictionary is the shared class-level dict.
        obj.__dict__ = cls._shared
        return obj

b1 = Borg()
b1.val = "Shared Value"

class C(Borg):
    pass

c1 = C()
print(c1 is b1)
print(c1.val)
Output
False
Shared Value
Explanation:
- _shared: Class-level dictionary that holds the state shared by all instances.
- __new__: Points each new instance's __dict__ at _shared, so every instance reads and writes the same data.
- b1 and c1: Different instances (c1 is b1 is False) that nevertheless share the same state.
- c1.val: Reads the value written through b1, which is why it prints "Shared Value". The sharing works in both directions, as the sketch below shows.
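The sharing is symmetric: writing through either instance updates the state seen by the other. A minimal sketch reusing the Borg and C classes above:
b2 = Borg()
c2 = C()

c2.count = 1          # write through the subclass instance
print(b2.count)       # 1: visible through the base-class instance

b2.count += 1
print(c2.count)       # 2: the update is visible everywhere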
Resetting Shared State
Borg singletons share state across instances. Redefining _shared in a subclass resets that state: instances of the subclass get a fresh, independent pool while keeping the Borg behaviour.
class NB(Borg):
    _shared = {}

nb1 = NB()
print(nb1.val)
Output
Resetting the shared state removes the previous attributes, so accessing val now raises an AttributeError (a defensive-access sketch follows the traceback).
Traceback (most recent call last):
  File "example.py", line 12, in <module>
    print(nb1.val)
AttributeError: 'NB' object has no attribute 'val'
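Since a reset pool starts out empty, it can help to read attributes defensively with getattr. A small sketch reusing the Borg and NB classes above (assuming the failing line is removed or caught first):
nb2 = NB()
print(getattr(nb2, 'val', 'not set'))   # 'not set': the NB pool starts empty

nb2.val = "New Shared Value"
nb3 = NB()
print(nb3.val)      # "New Shared Value": shared within the NB pool
print(Borg().val)   # "Shared Value": the original Borg pool is unaffected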
Web Crawler Using Classic Singleton
This example uses the Classic Singleton pattern to build a simple multi-threaded web crawler. A single shared crawler instance stores the URL queue, visited pages, and downloaded images, while multiple threads access the same data to crawl pages and download images without duplication.
import os
import threading
import urllib.request
from urllib.parse import urlparse, urljoin

import httplib2
from bs4 import BeautifulSoup


class CrawlerSingleton(object):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super().__new__(cls)
        return cls.instance


def navigate_site(max_links=5):
    # parsed_url is the parsed main_url defined under __main__.
    parser_crawlersingleton = CrawlerSingleton()
    while parser_crawlersingleton.url_queue:
        if len(parser_crawlersingleton.visited_url) == max_links:
            return
        url = parser_crawlersingleton.url_queue.pop()
        http = httplib2.Http()
        try:
            response, content = http.request(url)
        except Exception:
            continue
        parser_crawlersingleton.visited_url.add(url)
        print(url)

        bs = BeautifulSoup(content, "html.parser")
        for link in bs.find_all('a'):
            link_url = link.get('href')
            if not link_url:
                continue
            parsed = urlparse(link_url)
            # Skip links that point to a different site.
            if parsed.netloc and parsed.netloc != parsed_url.netloc:
                continue
            scheme = parsed_url.scheme
            netloc = parsed.netloc or parsed_url.netloc
            path = parsed.path
            link_url = scheme + '://' + netloc + path
            if link_url in parser_crawlersingleton.visited_url:
                continue
            parser_crawlersingleton.url_queue = [link_url] + \
                parser_crawlersingleton.url_queue


class ParallelDownloader(threading.Thread):
    def __init__(self, thread_id, name, counter):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print('Starting thread', self.name)
        download_images(self.name)
        print('Finished thread', self.name)


def download_images(thread_name):
    singleton = CrawlerSingleton()
    while singleton.visited_url:
        url = singleton.visited_url.pop()
        http = httplib2.Http()
        print(thread_name, 'Downloading images from', url)
        try:
            response, content = http.request(url)
        except Exception:
            continue
        bs = BeautifulSoup(content, "html.parser")
        for image in bs.find_all('img'):
            src = image.get('src')
            src = urljoin(url, src)
            basename = os.path.basename(src)
            print('basename:', basename)
            if basename != '':
                if src not in singleton.image_downloaded:
                    singleton.image_downloaded.add(src)
                    print('Downloading', src)
                    urllib.request.urlretrieve(src, os.path.join('images', basename))
        print(thread_name, 'finished downloading images from', url)


def main():
    crwSingltn = CrawlerSingleton()
    # All shared state lives on the single crawler instance.
    crwSingltn.url_queue = [main_url]
    crwSingltn.visited_url = set()
    crwSingltn.image_downloaded = set()

    navigate_site()

    if not os.path.exists('images'):
        os.makedirs('images')

    thread1 = ParallelDownloader(1, "Thread-1", 1)
    thread2 = ParallelDownloader(2, "Thread-2", 2)
    thread1.start()
    thread2.start()


if __name__ == "__main__":
    main_url = "https://www.geeksforgeeks.org/"
    parsed_url = urlparse(main_url)
    main()
Output
The shell output lists each crawled URL, followed by the basenames and source URLs of the images each thread downloads; the image files themselves are saved to the images/ directory.
Explanation:
- CrawlerSingleton ensures only one crawler object is shared across the program; __new__() creates the instance once and always returns the same object.
- Shared data: url_queue holds the URLs still to crawl, visited_url the pages already crawled, and image_downloaded the images already fetched, which prevents duplicate downloads.
- navigate_site() takes a URL from url_queue, downloads the page with httplib2, parses the HTML with BeautifulSoup, and pushes internal links back onto the queue.
- ParallelDownloader runs threads that call download_images() against the same singleton data.
- download_images() pops pages from visited_url, extracts image URLs, and downloads only images it has not seen before.
- In main(), the crawler and both threads use the same singleton instance, so all data is shared and duplication is avoided; a small sketch of this cross-thread sharing follows below.
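To isolate the guarantee the crawler relies on, here is a minimal, hypothetical sketch (separate from the crawler code, names are illustrative) showing that every thread constructing the singleton writes into the same shared container, just as the downloader threads do with visited_url and image_downloaded:
import threading

class MiniCrawler:
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super().__new__(cls)
        return cls.instance

# Set up a shared container on the single instance, like main() does.
MiniCrawler().visited = set()

def visit(page):
    # Each thread builds "its own" MiniCrawler(), but all of them
    # receive the same object and write into the same set.
    MiniCrawler().visited.add(page)

threads = [threading.Thread(target=visit, args=(f"page-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(MiniCrawler().visited))   # ['page-0', 'page-1', 'page-2']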