Does Bottle handle requests with no concurrency?




Question

At first, I thought Bottle would handle requests concurrently, so I wrote the test code below:

import json
import time

from bottle import Bottle, request, run

app = Bottle()
NUMBERS = 0


@app.get("/test")
def test():
    req_id = request.query.get('id', '0')
    global NUMBERS
    n = NUMBERS        # read the counter
    time.sleep(0.2)    # hold the stale value for 200 ms
    n += 1
    NUMBERS = n        # write it back; a concurrent request would lose an update here
    return req_id


@app.get("/status")
def status():
    return json.dumps({"numbers": NUMBERS})


run(app, host='0.0.0.0', port=8000)
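The counter logic above is a classic read-modify-write race: if two requests ran at the same time, both could read the same NUMBERS value during the sleep and one increment would be lost. The same pattern can be reproduced with plain threads (a standalone sketch, not using Bottle):

```python
import threading
import time

NUMBERS = 0

def increment():
    """Same read -> sleep -> write pattern as the /test handler."""
    global NUMBERS
    n = NUMBERS          # read
    time.sleep(0.01)     # widen the race window
    NUMBERS = n + 1      # write back a possibly stale value

threads = [threading.Thread(target=increment) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With 20 genuinely concurrent threads, most increments overwrite
# each other, so NUMBERS ends well below 20.
print(NUMBERS)
```

So a final count exactly equal to the request count is evidence that no two handlers ever overlapped.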

Then I used JMeter to request the /test URL with 10 threads, each looping 20 times.

After that, /status gave me {"numbers": 200} -- every one of the 10 × 20 = 200 increments survived -- which suggests that Bottle does not handle requests concurrently.

Did I misunderstand anything?

UPDATE

I ran another test that I think proves Bottle handles requests one by one (with no concurrency). I made a small change to the test function:

@app.get("/test")
def test():
    t1 = time.time()
    time.sleep(5)
    t2 = time.time()
    return {"t1": t1, "t2": t2}

When I accessed /test twice in a browser, I got:

{
    "t2": 1415941221.631711,
    "t1": 1415941216.631761
}
{
    "t2": 1415941226.643427,
    "t1": 1415941221.643508
}
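These timestamps make the serialization explicit: the second response's t1 is only about 12 ms after the first response's t2, so the second 5-second handler did not start until the first one had returned. Checking the arithmetic on the values above:

```python
# Timestamps copied from the two responses above.
first = {"t1": 1415941216.631761, "t2": 1415941221.631711}
second = {"t1": 1415941221.643508, "t2": 1415941226.643427}

# Each handler slept 5 seconds...
print(first["t2"] - first["t1"])   # ~5.0

# ...and the second request only began after the first finished.
gap = second["t1"] - first["t2"]
print(gap)                         # ~0.012 s: no overlap at all
```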

Recommended answer

Concurrency isn't a function of your web framework -- it's a function of the web server you use to serve it. Since Bottle is WSGI-compliant, you can serve Bottle apps through any WSGI server:

  • wsgiref (reference server in the Python stdlib) will give you no concurrency.
  • CherryPy dispatches through a thread pool (number of simultaneous requests = number of threads it's using).
  • nginx + uwsgi gives you multiprocess dispatch and multiple threads per process.
  • Gevent gives you lightweight coroutines that, in your use case, can easily achieve C10K+ with very little CPU load (on Linux -- on Windows it can only handle 1024 simultaneous open sockets) if your app is mostly IO- or database-bound.

The latter two can serve massive numbers of simultaneous connections.

According to http://bottlepy.org/docs/dev/api.html , when given no specific instructions, bottle.run uses wsgiref to serve your application, which explains why it handles only one request at a time.
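This behavior is easy to reproduce with wsgiref alone, no Bottle required: two simultaneous 0.5-second requests against wsgiref's simple server take about 1 second in total, because the second request waits for the first. A self-contained sketch:

```python
import threading
import time
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    time.sleep(0.5)  # stand-in for a slow handler
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"done"]

server = make_server("127.0.0.1", 0, app)   # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch():
    urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()

start = time.time()
clients = [threading.Thread(target=fetch) for _ in range(2)]
for c in clients:
    c.start()
for c in clients:
    c.join()
elapsed = time.time() - start

# ~1.0 s rather than ~0.5 s: the requests were served one at a time.
print(round(elapsed, 2))
server.shutdown()
```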