Nameko - Latest posts https://discourse.nameko.io Kontiki, a microservices framework in Python Hello,

I’ve just released Kontiki, a Python microservices framework built on AMQP and asyncio. It is inspired by Nameko’s design principles, which I found very elegant, and aims to provide a coherent, production-usable developer experience around the same message-driven approach.

Sharing it here in case it is useful to others.

https://discourse.nameko.io/t/kontiki-a-microservices-framework-in-python/820#post_1 Thu, 19 Mar 2026 19:27:18 +0000 discourse.nameko.io-post-2031
Girolle a Rust rpc lib Hi,

I started learning Rust in my free time two years ago, and one of my pet projects is a Rust library compatible with certain features of Nameko, heavily inspired by rusty-celery. This project is Girolle. Girolle is far from production-proof, but I have already had some wins with it at work.
Thanks to all the contributors of Nameko.

Cheers.

https://discourse.nameko.io/t/girolle-a-rust-rpc-lib/807#post_1 Sat, 29 Jun 2024 09:20:07 +0000 discourse.nameko.io-post-2017
How to handle external websockets? Hi @No_Split_Sherlock,

I know it’s a rather old topic, but out of curiosity, did you finally set up a websocket client as a dependency provider?

https://discourse.nameko.io/t/how-to-handle-external-websockets/352#post_4 Mon, 10 Jun 2024 13:06:01 +0000 discourse.nameko.io-post-2016
Implement reverse proxy in nameko http route Hi.
I want to implement a reverse proxy in the Nameko framework but I don’t know how I can do it.
I did it in the FastApi framework:

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from starlette.background import BackgroundTask

app = FastAPI()

@app.get("/test")
async def proxy(request: Request):
    try:
        os_headers = []

        HTTP_SERVER = httpx.AsyncClient(base_url="http://127.0.0.1:5000/login", timeout=1000.0)
        url = httpx.URL(path="", query=request.url.query.encode("utf-8"))
        rp_req = HTTP_SERVER.build_request(
            request.method, url, headers=os_headers, content=await request.body()
        )
        rp_resp = await HTTP_SERVER.send(rp_req, stream=True)
        return StreamingResponse(
            rp_resp.aiter_raw(),
            status_code=rp_resp.status_code,
            headers=rp_resp.headers,
            background=BackgroundTask(rp_resp.aclose),
        )
    except Exception as e:
        print(e)

How can I do the same thing in Nameko?

https://discourse.nameko.io/t/implement-reverse-proxy-in-nameko-http-route/797#post_1 Sat, 01 Jul 2023 09:13:42 +0000 discourse.nameko.io-post-2006
Are concurrently running methods the same as workers? As the default max_workers is 10, does this mean that a service class can execute up to 10 method calls at once?

In my case, I used “nameko run” to start a service class, and I observed that only one method was executing at a time after I made many RPC calls simultaneously. It seems to work like a normal queue. My attempt to update the max_workers parameter appears to have had no effect.

Thanks!

https://discourse.nameko.io/t/are-concurrently-running-methods-the-same-as-workers/793#post_1 Tue, 02 May 2023 00:28:37 +0000 discourse.nameko.io-post-2002
Is it possible to dispatch event asynchronously without waiting? I’m glad that applying the monkey patch solved your problem.

Nameko can run multiple services inside the same runner without any problem. It’s often helpful to do this, for example when testing multiple services interoperating, but in production I recommend using a runner per service. That makes it easier to reason about things, and you generally get more capacity because (on a multi-core host) each service runner will potentially occupy its own core.

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_6 Sat, 22 Apr 2023 19:47:04 +0000 discourse.nameko.io-post-2001
Is it possible to dispatch event asynchronously without waiting? Hi @mattbennett, thanks for your response.
I have applied the eventlet monkey patch in my project.

And I found that it was because I ran the RPC and event handler services in the same ServiceRunner. It works asynchronously when I run the services individually. I got the same behavior when I tested running the services with the nameko shell. So, generally, should we not run an RPC service and an event handler service in the same ServiceRunner?

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_5 Thu, 20 Apr 2023 16:39:04 +0000 discourse.nameko.io-post-2000
Is it possible to dispatch event asynchronously without waiting? You’re right to be confused. The “dispatch duration” printed should be milliseconds.

How are you running these services? It looks to me like you have not applied the eventlet monkey patch.

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_4 Mon, 20 Mar 2023 21:25:04 +0000 discourse.nameko.io-post-1999
Is it possible to dispatch event asynchronously without waiting? Hi mattbennett, thanks for your response.

My requirement is just to dispatch the event handler without waiting, but in my case, after dispatching, the EventDispatcher keeps waiting until the event_handler has finished processing. Is the ACK only sent back after processing is done, or is there some configuration that I missed?

I have attached some references to my case; as you can see, there is a 20s sleep in the test event_handler. When I dispatched this event, I expected it to take less than 20 seconds, but it almost never did. As I noted, the dispatcher only completes after the event_handler finishes its operation.

Event dispatching service

Event listening service

Result

Addition:
I expected this event dispatcher to work the same way as when I wrap the dispatcher in a thread, since it worked when I used eventlet.spawn(event_dispatcher).

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_3 Thu, 16 Mar 2023 04:08:25 +0000 discourse.nameko.io-post-1998
Is it possible to dispatch event asynchronously without waiting? Can you explain in more detail what you’re trying to achieve?

The event_handler entrypoint is async by default – there is no client waiting for an answer. The only thing that waits for the method to finish is the ACK of the event message. It is implemented this way to make sure that, if the worker is interrupted somehow, the message is requeued for another handler to pick up.
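As an illustration of that ack-after-completion semantic, here is a stdlib-only sketch of at-least-once delivery (plain `queue`, not Nameko’s implementation; `handle` is a made-up handler):

```python
import queue

tasks = queue.Queue()
tasks.put("payload-1")

def handle(msg, should_fail):
    # Stand-in for an @event_handler method.
    if should_fail:
        raise RuntimeError("worker interrupted")
    return f"processed {msg}"

# First attempt: the worker is interrupted mid-flight, so the message is
# not acked; instead it is requeued for another handler to pick up.
msg = tasks.get()
try:
    result = handle(msg, should_fail=True)
except RuntimeError:
    tasks.put(msg)

# Second attempt succeeds; only now would the message be acked (removed).
msg = tasks.get()
result = handle(msg, should_fail=False)
print(result)  # processed payload-1
```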

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_2 Wed, 15 Mar 2023 13:55:53 +0000 discourse.nameko.io-post-1997
Is it possible to dispatch event asynchronously without waiting? One of my event listening tasks (event_handler) is taking a long time to process. So is there any way to emit the event and let it work like an async function, without awaiting? Thanks!!

https://discourse.nameko.io/t/is-it-possible-to-dispatch-event-asynchronously-without-waiting/792#post_1 Tue, 14 Mar 2023 16:54:55 +0000 discourse.nameko.io-post-1996
How different nameko run processes share one listen socket? I did a tiny experiment here and verified that this behaviour is all to do with eventlet’s .listen() implementation.

You can run the following in two identical processes and they’ll both happily listen:

import eventlet
eventlet.monkey_patch()

listen_sock = eventlet.listen(("127.0.0.1", 8001))

print("listening")

try:
    while True:
        sock, addr = listen_sock.accept()
        print("new sock", sock, addr)
except KeyboardInterrupt:
    print("quitting")

listen_sock.close()

On my MacOS machine, the last-started process will receive all of the connections; if you terminate that process, the other starts receiving them.

eventlet.listen() accepts kwargs reuse_addr and reuse_port that set SO_REUSEADDR and SO_REUSEPORT respectively on the listening socket. But because they’re both set, regardless of how you invoke the function, SO_REUSEPORT clobbers the role of SO_REUSEADDR in controlling whether or not there’s an error about the port already being in use. The upshot is that passing reuse_addr=False into eventlet.listen doesn’t actually prevent two processes binding to the same port, but passing reuse_port=False does.

The following Stack Overflow post has a really helpful breakdown of what these two options do: linux - How do SO_REUSEADDR and SO_REUSEPORT differ? - Stack Overflow
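The clobbering behaviour is easy to reproduce with the plain stdlib `socket` module (no eventlet; `make_listener` is a hypothetical helper approximating what `eventlet.listen()` sets up):

```python
import socket

def make_listener(port, reuse_port):
    # Hypothetical helper: SO_REUSEADDR always on, SO_REUSEPORT optional,
    # roughly mirroring the kwargs described above.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if reuse_port and hasattr(socket, "SO_REUSEPORT"):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(1)
    return sock

first = make_listener(0, reuse_port=True)   # kernel picks a free port
port = first.getsockname()[1]

# With SO_REUSEPORT on both sockets, a second "process" binds the same port.
second = make_listener(port, reuse_port=True)
second.close()

# Without SO_REUSEPORT the bind fails with EADDRINUSE, despite SO_REUSEADDR
# (SO_REUSEADDR only helps with sockets lingering in TIME_WAIT).
try:
    make_listener(port, reuse_port=False)
    bind_failed = False
except OSError:
    bind_failed = True

first.close()
print(bind_failed)  # True on Linux/macOS
```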

https://discourse.nameko.io/t/how-different-nameko-run-proccesses-share-one-listen-socket/785#post_3 Mon, 27 Feb 2023 11:30:51 +0000 discourse.nameko.io-post-1992
How different nameko run processes share one listen socket? When you say “a completely separate process”, you mean a separate Python process, with its own PID, right?

Clearly that should not be possible, but I have also noticed this behaviour, and I’m confused by it too. I have assumed it’s some problem with eventlet’s .listen(), but I’ve not spent the time to reproduce it or track it down to specific versions of eventlet/Python.

https://discourse.nameko.io/t/how-different-nameko-run-proccesses-share-one-listen-socket/785#post_2 Sun, 26 Feb 2023 16:54:07 +0000 discourse.nameko.io-post-1991
Service that dynamically spawns and kills managed threads? Your approach seems fine, but advantages of using events to do the fan-out are:

  1. You’re using Nameko’s high-level API rather than using pseudo-internals (.kill() on the managed thread is an eventlet API)
  2. If you have multiple service instances, events will allow them all to participate, whereas threads are constrained to just the instance that spawns them.
https://discourse.nameko.io/t/service-that-dynamically-spawns-and-kills-managed-threads/787#post_4 Sun, 26 Feb 2023 16:34:13 +0000 discourse.nameko.io-post-1990
Service that dynamically spawns and kills managed threads? Using an event makes a lot of sense. What I ended up implementing is a mapping (dict) in the Dependency Injector that gets handed off to each Worker that spawns a new thread (self.container.spawn_managed_thread), where each thread is keyed by a unique identifier associated externally to the Service. When it comes time to stop a thread, it is looked up again in the Worker and its .kill() method is invoked. I will look at converting this to an event-driven model. Do you see any benefits either way? Thanks!

https://discourse.nameko.io/t/service-that-dynamically-spawns-and-kills-managed-threads/787#post_3 Thu, 23 Feb 2023 18:41:42 +0000 discourse.nameko.io-post-1989
Service that dynamically spawns and kills managed threads? Managed threads will run until they exit; there is no particular API to stop them again, but it can be done…

The easiest way would probably be to communicate to the thread that it should terminate. The nameko-grpc library does something like this for the threads it uses to manage each connection. See nameko-grpc/connection.py at master · nameko/nameko-grpc · GitHub
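A minimal sketch of that signal-the-thread pattern, using the stdlib `threading` module for illustration (the same idea applies to a Nameko managed thread checking a flag in its loop):

```python
import threading
import time

stop = threading.Event()
iterations = []

def worker():
    # Long-running loop that checks a stop flag on every pass, so it can
    # be asked to terminate instead of being killed from outside.
    count = 0
    while not stop.is_set():
        count += 1
        time.sleep(0.01)
    iterations.append(count)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)

stop.set()        # ask the thread to finish its current pass and exit
t.join(timeout=1)
print(t.is_alive())  # False: the thread exited cleanly
```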

https://discourse.nameko.io/t/service-that-dynamically-spawns-and-kills-managed-threads/787#post_2 Thu, 23 Feb 2023 13:37:41 +0000 discourse.nameko.io-post-1988
Nameko v3.0.0rc11: Passing config to worker_factory Deriving the service name from config is fine. It’s one of the advantages of config being a global.

It shouldn’t matter if config.get('SERVICE_NAME') is None temporarily, as long as you reimport the module after the patch is in place.

What is the error you’re seeing?

https://discourse.nameko.io/t/nameko-v3-0-0rc11-passing-config-to-worker-factory/788#post_4 Wed, 08 Feb 2023 11:45:02 +0000 discourse.nameko.io-post-1987
Nameko v3.0.0rc11: Passing config to worker_factory Thanks Matt! That works!

Another question related to MyService above: What is the best practice to set the name of a service from a config object?

When the service gets imported at the import service line (the module where MyService is defined), the config object is not patched yet, so the line name = config.get('SERVICE_NAME') runs into an error.

https://discourse.nameko.io/t/nameko-v3-0-0rc11-passing-config-to-worker-factory/788#post_3 Tue, 07 Feb 2023 16:09:15 +0000 discourse.nameko.io-post-1986
Nameko v3.0.0rc11: Passing config to worker_factory Nameko is complaining because worker_factory expects its arguments to be dependencies to be injected. Service classes don’t need to define a config dependency provider in Nameko 3, because the config object is a global.

You can update the global config object directly from your test. In fact, the config object has a patch method for this purpose. I’ve written some docs on this here: Testing services.

Your service is reading from the config object at import time, so you need to make sure your test populates the desired configuration beforehand. Moving the import inline is sufficient, or you can use importlib.reload.

This works:

from nameko.testing.services import worker_factory
from nameko import config

import service  # module where MyService is defined

import pytest
import importlib

class TestMyService:

    @pytest.fixture
    def set_config(self):
        with config.patch({"SERVICE_NAME": "MyService"}):
            yield

    def test_sum(self, set_config):
        importlib.reload(service)
        svc = worker_factory(service.MyService)

        assert svc.name == "MyService"
        assert svc.sum(2, 3) == 5
https://discourse.nameko.io/t/nameko-v3-0-0rc11-passing-config-to-worker-factory/788#post_2 Mon, 06 Feb 2023 11:26:56 +0000 discourse.nameko.io-post-1985
Nameko v3.0.0rc11: Passing config to worker_factory Hi,

I am using v3.0.0rc11. My service is defined as below:

import logging

from nameko import config
from nameko.rpc import rpc

class MyService:
    name = config.get('SERVICE_NAME')
    logger = logging.getLogger(__name__)

    @rpc
    def sum(self, num1, num2):
        self.logger.info(f'Num1: {num1}, Num2: {num2}')
        return num1 + num2

I have written my unit test thus:

from nameko.testing.services import worker_factory

from MySvc.myservice import MyService

class TestMyService:
    def setup_method(self):
        self.cfg = {
            'SERVICE_NAME' : 'MyService'
        }

    def test_sum(self, container_factory):
        cfg = {
            'SERVICE_NAME' : 'MyService'
        }
        
        svc = worker_factory(MyService, config=cfg)

        assert( svc.name == 'MyService' )
        assert( svc.sum(2,3) == 5 )

The unit test fails, with this error:

nameko.exceptions.ExtensionNotFound: DependencyProvider(s) 'dict_keys(['config'])' not found on <class 'MySvc.myservice.MyService'>.

What am I doing wrong? How can I pass in config to worker_factory so the config gets passed in to service properly?

Thanks for all the help!

Radha.

https://discourse.nameko.io/t/nameko-v3-0-0rc11-passing-config-to-worker-factory/788#post_1 Mon, 06 Feb 2023 09:39:22 +0000 discourse.nameko.io-post-1984
Service that dynamically spawns and kills managed threads? Any suggestions on creating a nameko service that allows for an RPC call to dynamically spawn, and then later, kill, a managed thread?

So far, I’ve been able to spawn managed threads, but I’m looking for a way to stop them.

https://discourse.nameko.io/t/service-that-dynamically-spawns-and-kills-managed-threads/787#post_1 Thu, 12 Jan 2023 21:08:06 +0000 discourse.nameko.io-post-1983
How long will it take to release 3.0? Any updates? Really looking forward to official 3.0.0 release also.

https://discourse.nameko.io/t/how-long-will-it-take-to-release-3-0/591#post_5 Thu, 12 Jan 2023 21:04:38 +0000 discourse.nameko.io-post-1982
How different nameko run processes share one listen socket? Hi folks!
I am trying to understand how things work behind the scenes.
When I run a service that uses HttpEntrypoint, it binds to port 8000 and handles requests.
When I start another process, in my opinion it is supposed to try to bind to the same port and fail. But it does not fail, and actually starts to receive every second request.
I looked in the code of class WebServer:

def start(self):
    if not self._starting:
        self._starting = True

So I understand that this code creates the listening socket only once, allowing many @http entrypoints. So I understand how this works within the memory space of a single process.

But how can a completely separate process also share the WebServer’s SharedExtension!?

Thx :slight_smile:

https://discourse.nameko.io/t/how-different-nameko-run-proccesses-share-one-listen-socket/785#post_1 Sun, 18 Dec 2022 14:43:18 +0000 discourse.nameko.io-post-1980
How to return ACK from consumer as soon as message is received/consumed? from nameko.rpc import Rpc as NamekoRpc, get_rpc_exchange, Responder
from nameko.constants import (
    AMQP_SSL_CONFIG_KEY, AMQP_URI_CONFIG_KEY, DEFAULT_SERIALIZER, SERIALIZER_CONFIG_KEY
)
from nameko.exceptions import (ContainerBeingKilled, MalformedRequest)
from functools import partial
from nameko.events import EventHandler as NamekoEventHandler

class Rpc(NamekoRpc):

    def __init__(self, only_once: bool = False, *args, **kwargs):
        self.only_once = only_once
        super(Rpc, self).__init__(*args, **kwargs)

    def handle_message(self, body, message):
        if self.only_once is True:
            # Consume only once: automatically ack upon receiving the message
            self.rpc_consumer.queue_consumer.ack_message(message)
        try:
            args = body['args']
            kwargs = body['kwargs']
        except KeyError:
            raise MalformedRequest('Message missing `args` or `kwargs`')

        self.check_signature(args, kwargs)

        context_data = self.unpack_message_headers(message)

        handle_result = partial(self.handle_result, message)
        try:
            self.container.spawn_worker(self, args, kwargs,
                                        context_data=context_data,
                                        handle_result=handle_result)
        except ContainerBeingKilled:
            self.rpc_consumer.requeue_message(message)

    def handle_result(self, message, worker_ctx, result, exc_info):
        amqp_uri = self.container.config[AMQP_URI_CONFIG_KEY]
        serializer = self.container.config.get(
            SERIALIZER_CONFIG_KEY, DEFAULT_SERIALIZER
        )
        exchange = get_rpc_exchange(self.container.config)
        ssl = self.container.config.get(AMQP_SSL_CONFIG_KEY)

        responder = Responder(amqp_uri, exchange, serializer, message, ssl=ssl)
        result, exc_info = responder.send_response(result, exc_info)

        if self.only_once is False:
            self.rpc_consumer.queue_consumer.ack_message(message)
        return result, exc_info

rpc = Rpc.decorator
https://discourse.nameko.io/t/how-to-return-ack-from-consumer-as-soon-as-message-is-received-consumed/765#post_2 Mon, 21 Nov 2022 16:08:47 +0000 discourse.nameko.io-post-1977
Nameko shell hangs on rpc call My guess is that the MQ connection may have been broken

https://discourse.nameko.io/t/nameko-shell-hangs-on-rpc-call/774#post_2 Tue, 25 Oct 2022 13:01:41 +0000 discourse.nameko.io-post-1971
Nameko shell hangs on rpc call Initially the nameko shell is able to communicate with the nameko service. The service is a simple hello world:

class GreetingService:
    name = 'greeting'
    @rpc
    def hello(self):
        return f"{datetime.today()}: hi"

Within the shell, I can call hello in a loop without problems. If I wait 5 minutes or more to call hello then the shell hangs indefinitely.

>>> helo = n.rpc.greeting.hello
>>> helo()
'2022-10-03 12:40:12.628285: Hello, alarm device count=936'
>>> helo()
***hangs here***

The second helo call gets stuck and is impervious to keyboard interrupt; I have to kill the shell process.
Do you have any ideas why this is occurring?
Or do you have any suggestions for troubleshooting?

Environment:
nameko - 2.14.1
Windows 10
python3.9.5

https://discourse.nameko.io/t/nameko-shell-hangs-on-rpc-call/774#post_1 Mon, 03 Oct 2022 16:57:57 +0000 discourse.nameko.io-post-1967
About terminating asynchronous tasks The previous project was implemented asynchronously with Django + Celery. In an asynchronous task, Celery can call the revoke method with the current task_id to terminate the currently running task. I have now used nameko for a long time and have not found a similar termination method. Is there such a method or plugin?
For example:
from xxx.celery import app
app.control.revoke(current_task_id, terminate=True, signal='SIGUSR1')

https://discourse.nameko.io/t/about-terminating-asynchronous-tasks/771#post_1 Tue, 28 Jun 2022 09:21:47 +0000 discourse.nameko.io-post-1964
How to return ACK from consumer as soon as message is received/consumed? I am using nameko.messaging.consume to consume messages from a queue. Here’s some sample code:

from kombu import Queue
from nameko.messaging import consume

class Service:
    name = "sample_service"
    QUEUE = Queue("queue_name", no_declare=True)

    @consume(QUEUE)
    def process_message(self, payload):
        # Some long running code ...
        return result

By default, the ACK will be sent to the RabbitMQ broker after the process_message function returns a response (here, the return result statement). I want to send the ACK as soon as the consumer receives the message. How can I do that?

*In the “pika” library, the consumer acknowledges as soon as the message is consumed. That is a good example of what I want to replicate with nameko’s consumer.

Thanks :blush: @mattbennett

https://discourse.nameko.io/t/how-to-return-ack-from-consumer-as-soon-as-message-is-received-consumed/765#post_1 Wed, 25 May 2022 15:47:44 +0000 discourse.nameko.io-post-1958
Error while calling Dask cluster in Nameko worker I understand, it seems that mixing up Nameko with Dask may not be as straightforward as I hoped, thank you for your help!

https://discourse.nameko.io/t/error-while-calling-dask-cluster-in-nameko-worker/758#post_4 Mon, 09 May 2022 20:46:48 +0000 discourse.nameko.io-post-1949
Error while calling Dask cluster in Nameko worker AFAIK there is no simple way to mix and match asyncio and eventlet concurrency models in the same Python process. You will get event-loop-related errors, and it’s not worth the time to debug them. Best to keep code that uses nameko/eventlet in a separate process or service, away from the async/await code.

https://discourse.nameko.io/t/error-while-calling-dask-cluster-in-nameko-worker/758#post_3 Tue, 03 May 2022 19:24:25 +0000 discourse.nameko.io-post-1948
Unit testing without a real rabbitmq instance For unit testing, you can use the decorator:

nameko.config.patch({"AMQP_URI": "memory://"})

to test events without having a real MQ instance.
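The patch-and-restore idea here is the same one behind the stdlib’s mock.patch.dict; as a rough analogy (this is plain unittest.mock on an ordinary dict, not nameko’s config object):

```python
from unittest import mock

config = {"AMQP_URI": "amqp://guest:guest@localhost"}

# Inside the block the value is overridden; on exit it is restored, which is
# what makes the pattern safe to use per-test.
with mock.patch.dict(config, {"AMQP_URI": "memory://"}):
    patched_value = config["AMQP_URI"]

restored_value = config["AMQP_URI"]
print(patched_value, restored_value)
```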

https://discourse.nameko.io/t/unit-testing-without-a-real-rabbitmq-instance/743#post_3 Thu, 21 Apr 2022 08:50:36 +0000 discourse.nameko.io-post-1947
Error while calling Dask cluster in Nameko worker Probably because asyncio and eventlet work differently. Nameko is built on top of eventlet and uses implicit switching between coroutines; asyncio uses explicit yields.

https://discourse.nameko.io/t/error-while-calling-dask-cluster-in-nameko-worker/758#post_2 Thu, 21 Apr 2022 08:48:25 +0000 discourse.nameko.io-post-1946
Retry decorator I believe there is something odd in the utils/retry.py decorator which should be brought to attention. I’m not sure whether it is intentional, but at least I spent some time discovering the reason behind it :wink:

The issue is that the delay time is not reset after a successful call. With this behaviour you could basically create an indefinite sleep period if you don’t use the max_delay property, after only a few calls, if you have a reasonably sized backoff time.

As the RetryDelay class is not locally scoped, its state is preserved within your worker class. This is not a major issue; however, if you have a few failures, your sleep time will increase very rapidly for future failed requests.

I believe we should reset the delay to its initial value after a successful call. I don’t think max_delay is a fair substitute here, as it would basically replace the whole idea behind the incremental increase of the sleeping time.

I can make a PR, but I would first like to better understand why this has been designed this way. At the very least it should be documented, I think.
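To illustrate the effect, here is a minimal made-up backoff sketch (not nameko’s actual utils/retry.py):

```python
class RetryDelay:
    # Made-up exponential backoff holder. Because an instance like this is
    # shared across calls (class/module scope), its state is never reset.
    def __init__(self, delay=1, backoff=2, max_delay=None):
        self.delay = delay
        self.backoff = backoff
        self.max_delay = max_delay

    def next(self):
        current = self.delay
        self.delay *= self.backoff
        if self.max_delay is not None:
            self.delay = min(self.delay, self.max_delay)
        return current

shared = RetryDelay()

# Three failures in one call: waits grow 1, 2, 4 as intended.
first_call_waits = [shared.next() for _ in range(3)]

# A later, unrelated call does NOT start from 1 again; it keeps growing
# from where the previous call left off: 8, 16, 32.
second_call_waits = [shared.next() for _ in range(3)]
print(first_call_waits, second_call_waits)
```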

https://discourse.nameko.io/t/retry-decorater/763#post_1 Wed, 20 Apr 2022 23:26:15 +0000 discourse.nameko.io-post-1945
Error while calling Dask cluster in Nameko worker Hi,

I’m trying to use Dask within a Nameko service, but I’m running into some issues.
Here is a sample code I use:


from nameko.rpc import rpc

from dask.distributed import Client
import dask.array as da

# creates a local Dask Cluster
client = Client()

class TestService(object):

    name = "test_service"

    @rpc
    def task(self) -> str:

        dim = 100000
        chunk = 10000

        x = da.random.random((dim, dim), chunks=(chunk, chunk))
        y = da.exp(x).sum()

        y.compute()

Unfortunately, the following exception is raised at startup:


Traceback (most recent call last):
  
   ....

    main(args)
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/nameko/cli/run.py", line 179, in main
    import_service(path)
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/nameko/cli/run.py", line 44, in import_service
    __import__(module_name)
  File "/home/user/test_service/start.py", line 5, in <module>
    from test_service import TestService
  File "/home/user/test_service/test_service.py", line 16, in <module>
    from dask.distributed import Client
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/dask/distributed.py", line 11, in <module>
    from distributed import *
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/__init__.py", line 7, in <module>
    from .actor import Actor, ActorFuture
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/actor.py", line 5, in <module>
    from .client import Future
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/client.py", line 59, in <module>
    from .batched import BatchedSend
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/batched.py", line 10, in <module>
    from .core import CommClosedError
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/core.py", line 28, in <module>
    from .comm import (
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/comm/__init__.py", line 25, in <module>
    _register_transports()
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/comm/__init__.py", line 17, in _register_transports
    from . import inproc, tcp, ws
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/comm/tcp.py", line 387, in <module>
    class BaseTCPConnector(Connector, RequireEncryptionMixin):
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/distributed/comm/tcp.py", line 389, in BaseTCPConnector
    _resolver = netutil.ExecutorResolver(close_executor=False, executor=_executor)
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/tornado/util.py", line 288, in __new__
    instance.initialize(*args, **init_kwargs)
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/tornado/netutil.py", line 427, in initialize
    self.io_loop = IOLoop.current()
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/tornado/ioloop.py", line 263, in current
    loop = asyncio.get_event_loop()
  File "/home/user/.pyenv/versions/3.9.9/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'MainThread'.

I worked this around by setting an asyncio event loop before dask.distributed is imported:

import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

from nameko.rpc import rpc
from dask.distributed import Client
import dask.array as da
client = Client()

class TestService(object):
    ...

Now, when the RPC is called and the task is executed, I’m getting a different error:

ERROR:    | 2022-04-04 09:58:56.758 | nameko.containers::/...te-packages/nameko/containers.py:399 
error handling worker <WorkerContext [test_service.task] at 0x7fb47a9bb8b0>: 'coroutine' object is not iterable
Traceback (most recent call last):
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/nameko/containers.py", line 392, in _run_worker
    result = method(*worker_ctx.args, **worker_ctx.kwargs)
  File "/home/user/test_service/test_service.py", line 48, in pretend_task
    y.compute()
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/dask/base.py", line 288, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "/home/user/test_service/.venv/lib/python3.9/site-packages/dask/base.py", line 572, in compute
    return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
TypeError: 'coroutine' object is not iterable
/home/user/test_service/.venv/lib/python3.9/site-packages/nameko/containers.py:417: RuntimeWarning: coroutine 'Client._gather' was never awaited
  del exc_info

I tried to place the Client() call within the RPC (among other attempts) but I clearly need to understand why I’m getting this error when interacting with Dask. Any suggestion? Thanks!

https://discourse.nameko.io/t/error-while-calling-dask-cluster-in-nameko-worker/758#post_1 Mon, 04 Apr 2022 10:05:15 +0000 discourse.nameko.io-post-1925
Nameko/Django ORM integration Hello @jesusenlanet

Is there a reason the project was discontinued? It seems like a neat tool for transitioning from a Django project to nameko microservices!

https://discourse.nameko.io/t/nameko-django-orm-integration/166#post_11 Tue, 29 Mar 2022 03:03:47 +0000 discourse.nameko.io-post-1919
Config.get not working in nameko 2.14 Hello!

It seems that the autocomplete feature of your IDE is pointing you to another type of Config.

I’ve created a small test service with nameko 2.14.0:
service.py

from nameko.dependency_providers import Config
from nameko.web.handlers import http


class Service:

    name = "test_config"

    config = Config()

    @http("GET", "/")
    def foo(self, request):
        return f'Config Mongo url is {self.config.get("MONGODB_URI", "http://localhost:27017")}'

A config.yml

MONGODB_URI: "http://mongodb.com:27017"

And works as expected:

$ nameko run service --config config.yml


Try running your service; it looks correct. I don’t use pyright, but it seems to interfere with nameko’s config type in autocomplete.

Cheers!

https://discourse.nameko.io/t/config-get-not-work-in-nameko-2-14/748#post_2 Wed, 23 Mar 2022 19:23:36 +0000 discourse.nameko.io-post-1918
Potential concurrency issues with psycopg2? Hello!

I found this interesting question regarding nameko, sqlalchemy and psycopg2, but unfortunately it is unanswered: python - Potential Nameko concurrency issues with pyscopg2 driver for Postgresql - Stack Overflow

A brief excerpt of the question:

We are building a Python microservices application with Postgresql as the service datastore. At first glance Nameko seems a good starting point. However, the Nameko documentation section on Concurrency includes this statement […]
Our architect is suggesting Nameko is therefore not going to fly, because although the psycopg2 Postgresql driver is advertised as thread safe, there is a link clarifying:

Warning: Psycopg connections are not green thread safe and can’t be used concurrently by different green threads.

Personally, I have used Nameko-SQLAlchemy with psycopg2 without problems, but I’m by no means enough of a threading expert to know whether it’s safe.

What do you guys think?

Cheers!

https://discourse.nameko.io/t/potential-concurrency-issues-with-pyscopg2/752#post_1 Wed, 23 Mar 2022 19:11:35 +0000 discourse.nameko.io-post-1917
Config.get not working in nameko 2.14 I tried to create a service for mongodb, but when I try to get MONGO_URI using Config, it displays “method not found”.

https://discourse.nameko.io/t/config-get-not-work-in-nameko-2-14/748#post_1 Mon, 07 Mar 2022 01:46:57 +0000 discourse.nameko.io-post-1913
How can I differentiate an RpcProxy call from a ClusterRpcProxy call? So, I’ve setup multiple nameko services, where some of them call between each other (inter-service calls), using RpcProxy. But, along with that, I’ve also setup a fastapi service that calls those services via ClusterRpcProxy.

Is it possible to know whether a call comes via RpcProxy (inter-service) vs. ClusterRpcProxy (external service)? I know they both use the worker’s context_data, but I was not able to find an attribute that would tell me where the call is coming from.
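One convention that would work here (an assumption on my part, not a built-in nameko feature): have the external caller tag itself via context_data, which ClusterRpcProxy accepts as a keyword argument if I remember correctly, and treat the absence of the tag as an inter-service call. The key name "call_origin" is arbitrary:

```python
# Sketch of the convention, not a nameko API. The FastAPI side would tag
# itself when creating the proxy, e.g.
#   with ClusterRpcProxy(config, context_data={"call_origin": "external"}) as rpc:
#       rpc.my_service.my_method()
# while plain inter-service RpcProxy calls carry no such tag.

def classify_call(context_data):
    """Return 'external' if the caller tagged itself, else 'internal'."""
    return context_data.get("call_origin", "internal")

assert classify_call({"call_origin": "external"}) == "external"
assert classify_call({}) == "internal"
```

Inside a service method you would read the tag from the worker context (e.g. via a context-data dependency provider) and branch on it.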

]]>
https://discourse.nameko.io/t/how-can-i-differentiate-an-rpcproxy-call-from-a-clusterrpcproxy-call/744#post_1 Tue, 01 Mar 2022 12:19:23 +0000 discourse.nameko.io-post-1909
Unit testing without a real rabbitmq instance For Unit testing, you can use worker_factory and test without RabbitMQ.

For Integration Testing, container_factory can spawn multiple services that communicate with each other, which is why it needs a RabbitMQ connection.

Is there a reason you need container_factory for Unit Testing? You could spawn more than one instance with worker_factory, although they wouldn’t be able to communicate with each other.
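A rough stdlib sketch of what worker_factory gives you: the service class is instantiated directly and every dependency is replaced by a mock, so no broker is involved (the service and dependency names below are made up for illustration):

```python
# Stand-in for worker_factory: instantiate the service class and inject mocks
# in place of its dependency providers, then test methods directly.
from unittest.mock import MagicMock

class OrdersService:
    name = "orders"
    # in real code this would be a dependency, e.g. payments_rpc = RpcProxy("payments")

    def place_order(self, amount):
        return self.payments_rpc.charge(amount)

service = OrdersService()
service.payments_rpc = MagicMock()             # what worker_factory injects
service.payments_rpc.charge.return_value = "charged"

assert service.place_order(10) == "charged"
service.payments_rpc.charge.assert_called_once_with(10)
```

Because nothing here touches AMQP, tests in this style run fine in CI with no RabbitMQ instance at all.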

]]>
https://discourse.nameko.io/t/unit-testing-without-a-real-rabbitmq-instance/743#post_2 Mon, 28 Feb 2022 17:54:09 +0000 discourse.nameko.io-post-1908
Unit testing without a real rabbitmq instance Hi,

I am writing pytest-based unit tests for my nameko services as outlined here: Testing Services — nameko 2.12.0 documentation. I’ve been using container_factory for testing my services for the most part, with a few instances where I’ve used worker_factory.

However, it looks like there is no way to execute these tests without a real RabbitMQ instance, which is forcing us to tweak our CI infrastructure to accommodate it.

Is there a way to execute nameko unit tests without a real rabbitmq instance?

Thanks,
Radha

]]>
https://discourse.nameko.io/t/unit-testing-without-a-real-rabbitmq-instance/743#post_1 Wed, 23 Feb 2022 15:50:02 +0000 discourse.nameko.io-post-1907
How upgrade from version 2.1x to version 3 hi @lockeduan. I missed this thread, sorry. The same question was asked and answered here: Time to go 2 -> 3? - #3 by geoffjukes

]]>
https://discourse.nameko.io/t/how-upgrade-from-version-2-1x-to-version-3/727#post_2 Tue, 15 Feb 2022 10:46:01 +0000 discourse.nameko.io-post-1906
Time to go 2 -> 3? Hey @geoffjukes,

Since I have already switched to v3 (rc10), there are only a few things to be careful about.

That’s all at least for my case. I haven’t found anything else required to change after switching to 3.0.0rc10.

Happy upgrading :slight_smile:

Spyros

]]>
https://discourse.nameko.io/t/time-to-go-2-3/740#post_4 Tue, 15 Feb 2022 06:58:14 +0000 discourse.nameko.io-post-1905
Time to go 2 -> 3? Awesome, thanks for confirming @mattbennett! I’ll build the new services on v3.

You don’t hear from me often, because I rarely have any issues - which is a testament to the software you’ve written. I’m so glad I found it all those years ago…!

Geoff

]]>
https://discourse.nameko.io/t/time-to-go-2-3/740#post_3 Fri, 11 Feb 2022 22:28:48 +0000 discourse.nameko.io-post-1902
Time to go 2 -> 3? Hey Geoff!

Always good to hear your feedback :smiley:

Yes, v3 is the way forward. It is inching closer to being officially released, and many of the users I know of are now on v3. There is now an upgrade guide too! You can read it in the draft version of the new docs: https://agitated-hoover-1d4d78.netlify.app.

Matt.

]]>
https://discourse.nameko.io/t/time-to-go-2-3/740#post_2 Fri, 11 Feb 2022 17:16:27 +0000 discourse.nameko.io-post-1901
Time to go 2 -> 3? Hi,

Still loving Nameko. I’ve put off the move from 2 to 3 until release - but like BOTW2, it’s in limbo.

I’m starting a full refactor of the services I run that use 2, so I figure now is the best time to try out 3.

Are there any major changes to keep an eye out for? Aside from RpcProxy now being named ServiceRpc?

Hope all is well. 2020-2021 were strange years.

Geoff

]]>
https://discourse.nameko.io/t/time-to-go-2-3/740#post_1 Wed, 09 Feb 2022 19:17:50 +0000 discourse.nameko.io-post-1900
RPC-extension and heartbeat if my service doing long task (above 20min) Hi,

I have the same issue when training an ML model.

After training ended (after about 1 h), I received this error:

Connection to broker lost, trying to re-establish connection...
Traceback (most recent call last):
  File "/Users/.../lib/python3.8/site-packages/kombu/mixins.py", line 193, in consume
    conn.drain_events(timeout=safety_interval)
  File "/Users/.../lib/python3.8/site-packages/kombu/connection.py", line 317, in drain_events
    return self.transport.drain_events(self.connection, **kwargs)
  File "/Users/.../lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 169, in drain_events
    return connection.drain_events(**kwargs)
  File "/Users/.../lib/python3.8/site-packages/amqp/connection.py", line 522, in drain_events
    while not self.blocking_read(timeout):
  File "/Users/.../lib/python3.8/site-packages/amqp/connection.py", line 527, in blocking_read
    frame = self.transport.read_frame()
  File "/Users/.../lib/python3.8/site-packages/amqp/transport.py", line 310, in read_frame
    frame_header = read(7, True)
  File "/Users/.../lib/python3.8/site-packages/amqp/transport.py", line 639, in _read
    s = recv(n - len(rbuf))
  File "/Users/.../lib/python3.8/site-packages/eventlet/greenio/base.py", line 370, in recv
    return self._recv_loop(self.fd.recv, b'', bufsize, flags)
  File "/Users/.../lib/python3.8/site-packages/eventlet/greenio/base.py", line 364, in _recv_loop
    self._read_trampoline()
  File "/Users/.../lib/python3.8/site-packages/eventlet/greenio/base.py", line 332, in _read_trampoline
    self._trampoline(
  File "/Users/.../lib/python3.8/site-packages/eventlet/greenio/base.py", line 211, in _trampoline
    return trampoline(fd, read=read, write=write, timeout=timeout,
  File "/Users/.../lib/python3.8/site-packages/eventlet/hubs/__init__.py", line 159, in trampoline
    return hub.switch()
  File "/Users/t.../lib/python3.8/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/.../lib/python3.8/site-packages/kombu/mixins.py", line 171, in run
    for _ in self.consume(limit=None, **kwargs):
  File "/Users/.../lib/python3.8/site-packages/kombu/mixins.py", line 195, in consume
    conn.heartbeat_check()
  File "/Users/.../lib/python3.8/site-packages/kombu/connection.py", line 306, in heartbeat_check
    return self.transport.heartbeat_check(self.connection, rate=rate)
  File "/Users/.../lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 220, in heartbeat_check
    return connection.heartbeat_tick(rate=rate)
  File "/Users/.../lib/python3.8/site-packages/amqp/connection.py", line 767, in heartbeat_tick
    raise ConnectionForced('Too many heartbeats missed')
amqp.exceptions.ConnectionForced: Too many heartbeats missed

Can you tell me how you fixed it?
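Not an answer from this thread, but the workaround usually suggested for "Too many heartbeats missed" is to move the blocking, CPU-bound work off the event loop, e.g. with eventlet.tpool.execute under nameko, so the green thread servicing AMQP heartbeats is never starved (or, alternatively, to raise or disable the AMQP heartbeat). A stdlib stand-in for the shape of that fix, with illustrative names:

```python
# Stand-in for the usual fix: run the long job on a real OS thread. In the
# nameko/eventlet case the equivalent is eventlet.tpool.execute(train_model),
# which lets other green threads (including the heartbeat loop) keep running
# while the blocking work happens elsewhere.
from concurrent.futures import ThreadPoolExecutor
import time

def train_model():
    # stand-in for the hour-long training run
    time.sleep(0.05)
    return "model"

with ThreadPoolExecutor(max_workers=1) as pool:
    result = pool.submit(train_model).result()

assert result == "model"
```

The key point is that eventlet only switches green threads at I/O or explicit yields, so an uninterrupted CPU-bound loop silently blocks the heartbeat; handing the work to an OS thread restores those switches.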

]]>
https://discourse.nameko.io/t/rpc-extension-and-heartbeat-if-my-service-doing-long-task-above-20min/248#post_6 Sat, 29 Jan 2022 18:41:44 +0000 discourse.nameko.io-post-1898
Authentication/ Authorization with Websockets Hi,

I am looking for best practices in implementation schemes for authenticating websocket communication. At the moment, this is the workflow I am considering:

  1. Client (Web UI) gets a JWT from invoking a REST API endpoint - /login.
  2. Client initiates a websocket session - subscribing to a set of events, passing the JWT (somehow).
  3. Upon authenticating JWT, server begins sending events based on the socket connection.

What are the best practices for passing the JWT while initiating a websocket session? And how can I parse it at the websocket server end?
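One common pattern (an assumption here, not nameko-specific advice): send the JWT as a query parameter on the websocket URL, since browsers can't set custom headers on a WebSocket upgrade request, and parse it server-side at handshake time. The parameter name "token" is arbitrary:

```python
# Sketch of step 2: extract a JWT passed as ?token=... on the websocket URL.
from urllib.parse import urlparse, parse_qs

def extract_token(ws_url):
    """Return the JWT from the URL's query string, or None if absent."""
    params = parse_qs(urlparse(ws_url).query)
    values = params.get("token")
    return values[0] if values else None

assert extract_token("wss://api.example.com/events?token=aaa.bbb.ccc") == "aaa.bbb.ccc"
assert extract_token("wss://api.example.com/events") is None
```

The other widely used option is to require the first message after connecting to be an auth frame carrying the JWT, and close the socket if it doesn't arrive or fails verification; that keeps the token out of URLs and access logs.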

Appreciate all guidance in this matter.

Thanks,
Radha.

]]>
https://discourse.nameko.io/t/authentication-authorization-with-websockets/737#post_1 Mon, 24 Jan 2022 18:55:08 +0000 discourse.nameko.io-post-1896
How upgrade from version 2.1x to version 3 Hi, we are using nameko 2.12 in production. Should we keep using 2, or should we upgrade to 3? And is there a simple way to upgrade? Thanks

]]>
https://discourse.nameko.io/t/how-upgrade-from-version-2-1x-to-version-3/727#post_1 Fri, 17 Dec 2021 06:25:50 +0000 discourse.nameko.io-post-1886
Nameko test: Module already imported warning This warning has been suppressed in the latest releases (2.14.1 and 3.0.0-rc11)

]]>
https://discourse.nameko.io/t/nameko-test-module-already-imported-warning/714#post_4 Mon, 06 Dec 2021 10:01:26 +0000 discourse.nameko.io-post-1882