diff --git a/.gitignore b/.gitignore
index 2d069b63..324cdcdc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -13,3 +13,4 @@ acknowledgements.html
epilogue_1_how_to_get_there_from_here.html
epilogue_2_footguns.html
images/*.html
+.idea/
diff --git a/Readme.md b/Readme.md
index 37b7dba3..4248b186 100644
--- a/Readme.md
+++ b/Readme.md
@@ -6,35 +6,39 @@
## Table of Contents
+目录
O'Reilly have generously said that we will be able to publish this book under a [CC license](license.txt),
In the meantime, pull requests, typofixes, and more substantial feedback + suggestions are enthusiastically solicited.
-| Chapter | |
-| ------- | ----- |
-| [Preface](preface.asciidoc) | |
-| [Introduction: Why do our designs go wrong?](introduction.asciidoc)| ||
-| [**Part 1 Intro**](part1.asciidoc) | |
-| [Chapter 1: Domain Model](chapter_01_domain_model.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 2: Repository](chapter_02_repository.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 3: Interlude: Abstractions](chapter_03_abstractions.asciidoc) | |
-| [Chapter 4: Service Layer (and Flask API)](chapter_04_service_layer.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 5: TDD in High Gear and Low Gear](chapter_05_high_gear_low_gear.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 6: Unit of Work](chapter_06_uow.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 7: Aggregates](chapter_07_aggregate.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [**Part 2 Intro**](part2.asciidoc) | |
-| [Chapter 8: Domain Events and a Simple Message Bus](chapter_08_events_and_message_bus.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 9: Going to Town on the MessageBus](chapter_09_all_messagebus.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 10: Commands](chapter_10_commands.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 11: External Events for Integration](chapter_11_external_events.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 12: CQRS](chapter_12_cqrs.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Chapter 13: Dependency Injection](chapter_13_dependency_injection.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Epilogue: How do I get there from here?](epilogue_1_how_to_get_there_from_here.asciidoc) | |
-| [Appendix A: Recap table](appendix_ds1_table.asciidoc) | |
-| [Appendix B: Project Structure](appendix_project_structure.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Appendix C: A major infrastructure change, made easy](appendix_csvs.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Appendix D: Django](appendix_django.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
-| [Appendix F: Validation](appendix_validation.asciidoc) | |
+O'Reilly 大方地表示,我们将能够以 [CC 许可证](license.txt) 发布本书。
+与此同时,我们热情欢迎拉取请求、错别字修正,以及更深入的反馈与建议。
+
+| Chapter 章节 | |
+|--------------------------------------------------------------------------------------------------------------------------| ----- |
+| [Preface 前言(已翻译)](preface.asciidoc) | |
+| [Introduction: Why do our designs go wrong? 引言:为什么我们的设计会出问题?(已翻译)](introduction.asciidoc) | |
+| [**Part 1 Intro 第一部分简介(已翻译)**](part1.asciidoc) | |
+| [Chapter 1: Domain Model 第一章:领域模型(已翻译)](chapter_01_domain_model.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 2: Repository 第二章:仓储(已翻译)](chapter_02_repository.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 3: Interlude: Abstractions 第三章:插曲:抽象(已翻译)](chapter_03_abstractions.asciidoc) | |
+| [Chapter 4: Service Layer (and Flask API) 第四章:服务层(和 Flask API)(已翻译)](chapter_04_service_layer.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 5: TDD in High Gear and Low Gear 第五章:高速档与低速档中的测试驱动开发(TDD)(已翻译)](chapter_05_high_gear_low_gear.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 6: Unit of Work 第六章:工作单元(已翻译)](chapter_06_uow.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 7: Aggregates 第七章:聚合(已翻译)](chapter_07_aggregate.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [**Part 2 Intro 第二部分简介(已翻译)**](part2.asciidoc) | |
+| [Chapter 8: Domain Events and a Simple Message Bus 第八章:领域事件与简单消息总线(已翻译)](chapter_08_events_and_message_bus.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 9: Going to Town on the MessageBus 第九章:深入探讨消息总线(已翻译)](chapter_09_all_messagebus.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 10: Commands 第十章:命令(已翻译)](chapter_10_commands.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 11: External Events for Integration 第十一章:集成外部事件(已翻译)](chapter_11_external_events.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 12: CQRS 第十二章:命令查询责任分离(已翻译)](chapter_12_cqrs.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Chapter 13: Dependency Injection 第十三章:依赖注入(已翻译)](chapter_13_dependency_injection.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Epilogue: How do I get there from here? 尾声:我该如何开始?(已翻译)](epilogue_1_how_to_get_there_from_here.asciidoc) | |
+| [Appendix A: Recap table 附录A:总结表格(已翻译)](appendix_ds1_table.asciidoc) | |
+| [Appendix B: Project Structure 附录B:项目结构(已翻译)](appendix_project_structure.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Appendix C: A major infrastructure change, made easy 附录C:轻松替换重要的基础设施(已翻译)](appendix_csvs.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Appendix D: Django 附录D:Django(已翻译)](appendix_django.asciidoc) | [](https://travis-ci.org/cosmicpython/code) |
+| [Appendix F: Validation 附录F:校验(已翻译)](appendix_validation.asciidoc) | |
diff --git a/appendix_csvs.asciidoc b/appendix_csvs.asciidoc
index 5da0d027..0eb750cb 100644
--- a/appendix_csvs.asciidoc
+++ b/appendix_csvs.asciidoc
@@ -1,17 +1,23 @@
[[appendix_csvs]]
[appendix]
== Swapping Out the Infrastructure: [.keep-together]#Do Everything with CSVs#
+更换基础设施:用CSV完成一切
((("CSVs, doing everything with", id="ix_CSV")))
This appendix is intended as a little illustration of the benefits of the
Repository, Unit of Work, and Service Layer patterns. It's intended to
follow from <>.
+本附录旨在简要展示 _仓储_、工作单元和服务层模式的优势,内容承接 <> 。
+
Just as we finish building out our Flask API and getting it ready for release,
the business comes to us apologetically, saying they're not ready to use our API
and asking if we could build a thing that reads just batches and orders from a couple of
CSVs and outputs a third CSV with allocations.
+就在我们完成 _Flask_ API 的构建并准备发布时,业务团队带着歉意找到我们,说他们还没准备好使用我们的 API,
+并询问我们是否能构建一个能够从几个 CSV 中读取批次和订单数据,并输出第三个包含分配结果的 CSV 的工具。
+
Ordinarily this is the kind of thing that might have a team cursing and spitting
and making notes for their memoirs. But not us! Oh no, we've ensured that
our infrastructure concerns are nicely decoupled from our domain model and
@@ -19,10 +25,16 @@ service layer. Switching to CSVs will be a simple matter of writing a couple
of new `Repository` and `UnitOfWork` classes, and then we'll be able to reuse
_all_ of our logic from the domain layer and the service layer.
+通常情况下,这种需求可能会让团队咒骂连连、怒气冲天,并将其记入他们的回忆录。但我们不一样!哦不,
+我们已经确保我们的基础设施逻辑与领域模型和服务层完美解耦。切换到 CSV 只需要编写几个新的 `仓储` 和 `工作单元` 类就可以了,
+之后我们就能够重用领域层和服务层的 _所有_ 逻辑。
+
Here's an E2E test to show you how the CSVs flow in and out:
+下面是一个端到端(E2E)测试,向你展示 CSV 数据是如何流入和流出的:
+
[[first_csv_test]]
-.A first CSV test (tests/e2e/test_csv.py)
+.A first CSV test (tests/e2e/test_csv.py)(第一个 CSV 测试)
====
[source,python]
----
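# Illustrative sketch only: the book's actual listing is elided from this
# diff. It assumes a pytest fixture `make_csv(name, rows)` that writes a CSV
# into a shared temp folder and returns its path, and that the CLI script
# takes that folder as its only argument (see `def main(folder)` later).
import csv
import subprocess
import sys


def run_cli_script(folder):
    subprocess.run(
        [sys.executable, "src/bin/allocate-from-csv", str(folder)], check=True
    )


def test_cli_app_reads_csvs_with_batches_and_orders_and_outputs_allocations(make_csv):
    make_csv("batches.csv", [
        ["ref", "sku", "qty", "eta"],
        ["batch-1", "RUSTY-SPOON", 10, ""],
    ])
    orders = make_csv("orders.csv", [
        ["orderid", "sku", "qty"],
        ["order-1", "RUSTY-SPOON", 3],
    ])

    run_cli_script(orders.parent)

    with (orders.parent / "allocations.csv").open() as f:
        rows = list(csv.reader(f))
    assert rows == [
        ["orderid", "sku", "qty", "batchref"],
        ["order-1", "RUSTY-SPOON", "3", "batch-1"],
    ]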
@@ -58,9 +70,11 @@ def test_cli_app_reads_csvs_with_batches_and_orders_and_outputs_allocations(make
Diving in and implementing without thinking about repositories and all
that jazz, you might start with something like this:
+如果不考虑 _仓储_ 等各种模式,直接开始实现,你可能会从类似这样的代码入手:
+
[[first_cut_csvs]]
-.A first cut of our CSV reader/writer (src/bin/allocate-from-csv)
+.A first cut of our CSV reader/writer (src/bin/allocate-from-csv)(CSV 读写器的初步实现)
====
[source,python]
[role="non-head"]
@@ -120,12 +134,16 @@ if __name__ == "__main__":
It's not looking too bad! And we're reusing our domain model objects
and our domain service.
+看起来还不错!而且我们复用了领域模型对象和领域服务。
+
But it's not going to work. Existing allocations need to also be part
of our permanent CSV storage. We can write a second test to force us to improve
things:
+但这行不通。现有的分配也需要成为我们永久 CSV 存储的一部分。我们可以编写第二个测试来促使我们改进:
+
[[second_csv_test]]
-.And another one, with existing allocations (tests/e2e/test_csv.py)
+.And another one, with existing allocations (tests/e2e/test_csv.py)(另一个测试,带有现有的分配)
====
[source,python]
----
@@ -164,11 +182,18 @@ def test_cli_app_also_reads_existing_allocations_and_can_append_to_them(make_csv
And we could keep hacking about and adding extra lines to that `load_batches` function,
and some sort of way of tracking and saving new allocations—but we already have a model for doing that! It's called our Repository and Unit of Work patterns.
+我们可以继续不断折腾,在 `load_batches` 函数中添加额外的代码,以及某种方式来跟踪和保存新的分配——但我们已经
+有一个现成的模型来处理这些问题了!这就是我们的 _仓储_ 和工作单元模式。
+
All we need to do ("all we need to do") is reimplement those same abstractions, but
with CSVs underlying them instead of a database. And as you'll see, it really is relatively straightforward.
+我们所需要做的(“我们所需要做的”)只是重新实现这些相同的抽象,但用 CSV 作为其底层存储,而不是数据库。
+正如你将看到的,这确实相对简单。
+
=== Implementing a Repository and Unit of Work for CSVs
+为 CSV 实现一个 _仓储_ 和工作单元
((("repositories", "CSV-based repository")))
@@ -178,8 +203,12 @@ different CSVs_ (one for batches and one for allocations), and it gives us just
the familiar `.list()` API, which provides the illusion of an in-memory
collection of domain objects:
+以下是一个基于 CSV 的 _仓储_ 的实现示例。它抽象了从磁盘读取 CSV 的所有逻辑,
+包括必须读取 _两个不同的 CSV_ (一个用于批次,一个用于分配)的事实,并为我们提供了熟悉的 `.list()` API,
+这营造出一个内存中领域对象集合的假象:
+
[[csv_repository]]
-.A repository that uses CSV as its storage mechanism (src/allocation/service_layer/csv_uow.py)
+.A repository that uses CSV as its storage mechanism (src/allocation/service_layer/csv_uow.py)(一个使用 CSV 作为存储机制的仓储)
====
[source,python]
----
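# Illustrative sketch only: the actual listing is elided from this diff.
# Import paths, CSV file names, and column headers are assumptions; the
# `.list()` API and the two-file layout (batches + allocations) come from
# the text above.
import csv
from datetime import datetime
from pathlib import Path

from allocation.adapters import repository
from allocation.domain import model


class CsvRepository(repository.AbstractRepository):
    def __init__(self, folder):
        self._batches_path = Path(folder) / "batches.csv"
        self._allocations_path = Path(folder) / "allocations.csv"
        self._batches = {}
        self._load()

    def _load(self):
        # One CSV for batches...
        with self._batches_path.open() as f:
            for row in csv.DictReader(f):
                eta = (
                    datetime.strptime(row["eta"], "%Y-%m-%d").date()
                    if row["eta"] else None
                )
                self._batches[row["ref"]] = model.Batch(
                    ref=row["ref"], sku=row["sku"], qty=int(row["qty"]), eta=eta
                )
        # ...and one for existing allocations.
        if self._allocations_path.exists():
            with self._allocations_path.open() as f:
                for row in csv.DictReader(f):
                    line = model.OrderLine(row["orderid"], row["sku"], int(row["qty"]))
                    self._batches[row["batchref"]]._allocations.add(line)

    def add(self, batch):
        self._batches[batch.reference] = batch

    def get(self, reference):
        return self._batches.get(reference)

    def list(self):
        # The illusion of an in-memory collection of domain objects.
        return list(self._batches.values())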
@@ -229,10 +258,12 @@ class CsvRepository(repository.AbstractRepository):
((("Unit of Work pattern", "UoW for CSVs")))
And here's what a UoW for CSVs would look like:
+以下是基于 CSV 的工作单元 (UoW) 的实现示例:
+
[[csvs_uow]]
-.A UoW for CSVs: commit = csv.writer (src/allocation/service_layer/csv_uow.py)
+.A UoW for CSVs: commit = csv.writer (src/allocation/service_layer/csv_uow.py)(基于 CSV 的工作单元:commit = csv.writer)
====
[source,python]
----
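# Illustrative sketch only: as the caption says, commit() is essentially just
# csv.writer. Import paths and column names are assumptions; CsvRepository is
# the class sketched above in the same file.
import csv

from allocation.service_layer import unit_of_work


class CsvUnitOfWork(unit_of_work.AbstractUnitOfWork):
    def __init__(self, folder):
        self.batches = CsvRepository(folder)

    def commit(self):
        # Rewrite the allocations CSV from the state of our in-memory batches.
        with self.batches._allocations_path.open("w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["orderid", "sku", "qty", "batchref"])
            for batch in self.batches.list():
                for line in batch._allocations:
                    writer.writerow([line.orderid, line.sku, line.qty, batch.reference])

    def rollback(self):
        pass  # nothing to undo: we only touch the file on commit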
@@ -261,9 +292,12 @@ and allocations to CSV is pared down to what it should be—a bit
of code for reading order lines, and a bit of code that invokes our
_existing_ service layer:
+一旦我们实现了这些,我们那个把批次和分配读写到 CSV 的 CLI 应用程序,就可以精简为它应有的样子——一些用于读取订单项的代码,
+以及一些调用我们 _现有_ 服务层的代码:
+
[role="nobreakinside less_space"]
[[final_cli]]
-.Allocation with CSVs in nine lines (src/bin/allocate-from-csv)
+.Allocation with CSVs in nine lines (src/bin/allocate-from-csv)(九行代码实现用 CSV 进行分配)
====
[source,python]
----
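# Illustrative sketch only: the real nine-line listing is elided from this
# diff. Import paths and the service-layer signature
# services.allocate(orderid, sku, qty, uow) are assumptions.
import csv
import sys
from pathlib import Path

from allocation.service_layer import csv_uow, services


def main(folder):
    orders_path = Path(folder) / "orders.csv"
    uow = csv_uow.CsvUnitOfWork(folder)
    with orders_path.open() as f:
        for row in csv.DictReader(f):
            orderid, sku, qty = row["orderid"], row["sku"], int(row["qty"])
            services.allocate(orderid, sku, qty, uow)  # our existing service layer


if __name__ == "__main__":
    main(sys.argv[1])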
@@ -283,6 +317,12 @@ def main(folder):
((("CSVs, doing everything with", startref="ix_CSV")))
Ta-da! _Now are y'all impressed or what_?
+瞧! _现在你们是不是感到惊叹了?_
+
Much love,
+满怀爱意,
+
Bob and Harry
+
+Bob 和 Harry
diff --git a/appendix_django.asciidoc b/appendix_django.asciidoc
index 3c231ae1..3c7b383e 100644
--- a/appendix_django.asciidoc
+++ b/appendix_django.asciidoc
@@ -1,6 +1,7 @@
[[appendix_django]]
[appendix]
== Repository and Unit of Work [.keep-together]#Patterns with Django#
+在 Django 中使用 _仓储_ 和工作单元模式
((("Django", "installing")))
((("Django", id="ix_Django")))
@@ -8,6 +9,9 @@ Suppose you wanted to use Django instead of SQLAlchemy and Flask. How
might things look? The first thing is to choose where to install it. We put it in a separate
package next to our main allocation code:
+假设你想使用 Django 来替代 SQLAlchemy 和 Flask。那么,应该如何实现呢?首先,需要选择在哪里安装它。
+我们将其放在一个与我们的主要分配代码相邻的独立包中:
+
[[django_tree]]
====
@@ -52,6 +56,9 @@ package next to our main allocation code:
The code for this appendix is in the
appendix_django branch https://oreil.ly/A-I76[on GitHub]:
+本附录的代码位于 GitHub 上的 appendix_django 分支 https://oreil.ly/A-I76[见此处]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -60,10 +67,13 @@ git checkout appendix_django
Code examples follows on from the end of <>.
+代码示例接续自 <> 的结尾。
+
====
=== Repository Pattern with Django
+使用 Django 的仓储模式
((("pytest", "pytest-django plug-in")))
((("Repository pattern", "with Django", id="ix_RepoDjango")))
@@ -72,12 +82,16 @@ We used a plugin called
https://github.com/pytest-dev/pytest-django[`pytest-django`] to help with test
database management.
+我们使用了一个名为 https://github.com/pytest-dev/pytest-django[`pytest-django`] 的插件来帮助管理测试数据库。
+
Rewriting the first repository test was a minimal change—just rewriting
some raw SQL with a call to the Django ORM/QuerySet language:
+重写第一个仓储测试是一个最小化的改动——只是用调用 Django ORM/QuerySet 语言来重写了一些原始 SQL:
+
[[django_repo_test1]]
-.First repository test adapted (tests/integration/test_repository.py)
+.First repository test adapted (tests/integration/test_repository.py)(调整后的第一个仓储测试)
====
[source,python]
----
@@ -103,8 +117,10 @@ def test_repository_can_save_a_batch():
The second test is a bit more involved since it has allocations,
but it is still made up of familiar-looking Django code:
+第二个测试稍微复杂一些,因为它涉及分配,但它仍然由看起来熟悉的 Django 代码组成:
+
[[django_repo_test2]]
-.Second repository test is more involved (tests/integration/test_repository.py)
+.Second repository test is more involved (tests/integration/test_repository.py)(第二个仓储测试更加复杂)
====
[source,python]
----
@@ -135,9 +151,11 @@ def test_repository_can_retrieve_a_batch_with_allocations():
Here's how the actual repository ends up looking:
+实际的仓储最终如下所示:
+
[[django_repository]]
-.A Django repository (src/allocation/adapters/repository.py)
+.A Django repository (src/allocation/adapters/repository.py)(一个 Django 仓储)
====
[source,python]
----
@@ -168,15 +186,22 @@ The DRY-Python project people have built a tool called
https://mappers.readthedocs.io/en/latest[mappers] that looks like it might
help minimize boilerplate for this sort of thing.]
+你可以看到,该实现依赖于 Django 模型上的一些自定义方法,用于与我们的领域模型相互转换。脚注:
+DRY-Python 项目的开发者构建了一个名为 https://mappers.readthedocs.io/en/latest[mappers] 的工具,
+看起来它可能有助于减少此类代码的样板。
+
==== Custom Methods on Django ORM Classes to Translate to/from Our Domain Model
+在 Django ORM 类上定义自定义方法,用于与我们的领域模型相互转换
((("domain model", "Django custom ORM methods for conversion")))
((("object-relational mappers (ORMs)", "Django, custom methods to translate to/from domain model")))
Those custom methods look something like this:
+这些自定义方法看起来是这样的:
+
[[django_models]]
-.Django ORM with custom methods for domain model conversion (src/djangoproject/alloc/models.py)
+.Django ORM with custom methods for domain model conversion (src/djangoproject/alloc/models.py)(使用自定义方法进行领域模型转换的 Django ORM)
====
[source,python]
----
@@ -225,30 +250,39 @@ class OrderLine(models.Model):
you probably need an explicit try-get/except to handle the upsert.footnote:[
`@mr-bo-jangles` suggested you might be able to use https://oreil.ly/HTq1r[`update_or_create`],
but that's beyond our Django-fu.]
+对于值对象,`objects.get_or_create` 可以正常工作,但对于实体,你可能需要显式的 try-get/except 来处理 upsert(更新或插入)。脚注:
+`@mr-bo-jangles` 提出你或许可以使用 https://oreil.ly/HTq1r[`update_or_create`],但这超出了我们对 Django 的掌握范围。
<2> We've shown the most complex example here. If you do decide to do this,
be aware that there will be boilerplate! Thankfully it's not very
complex boilerplate.
+我们在这里展示了最复杂的示例。如果你决定这样做,请注意会有一些样板代码!不过值得庆幸的是,这些样板代码并不复杂。
<3> Relationships also need some careful, custom handling.
+关系也需要一些仔细而定制化的处理。
NOTE: As in <>, we use dependency inversion.
The ORM (Django) depends on the model and not the other way around.
((("Django", "Repository pattern with", startref="ix_DjangoRepo")))
((("Repository pattern", "with Django", startref="ix_RepoDjango")))
+与 <> 中一样,我们使用了依赖反转原则。
+ORM(Django)依赖于模型,而不是反过来。
=== Unit of Work Pattern with Django
+使用 Django 的工作单元模式
((("Django", "Unit of Work pattern with", id="ix_DjangoUoW")))
((("Unit of Work pattern", "with Django", id="ix_UoWDjango")))
The tests don't change too much:
+测试并没有发生太大的变化:
+
[[test_uow_django]]
-.Adapted UoW tests (tests/integration/test_uow.py)
+.Adapted UoW tests (tests/integration/test_uow.py)(适配后的工作单元测试)
====
[source,python]
----
@@ -290,9 +324,11 @@ def test_rolls_back_on_error():
<1> Because we had little helper functions in these tests, the actual
main bodies of the tests are pretty much the same as they were with
SQLAlchemy.
+由于我们在这些测试中使用了一些小的辅助函数,测试的主体部分实际上与使用 SQLAlchemy 时几乎相同。
<2> The `pytest-django` `mark.django_db(transaction=True)` is required to
test our custom transaction/rollback behaviors.
+为了测试我们自定义的事务/回滚行为,需要使用 `pytest-django` 的 `mark.django_db(transaction=True)`。
@@ -300,9 +336,11 @@ And the implementation is quite simple, although it took me a few
tries to find which invocation of Django's transaction magic
would work:
+实现相当简单,尽管我花了几次尝试才找到能够发挥作用的 Django 事务机制的调用方式:
+
[[start_uow_django]]
-.UoW adapted for Django (src/allocation/service_layer/unit_of_work.py)
+.UoW adapted for Django (src/allocation/service_layer/unit_of_work.py)(适配 Django 的工作单元)
====
[source,python]
----
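# Illustrative sketch only, reconstructed from the callouts below; the book's
# actual listing is elided from this diff. AbstractUnitOfWork is defined
# earlier in this same file; the DjangoRepository name is an assumption.
from django.db import transaction

from allocation.adapters import repository


class DjangoUnitOfWork(AbstractUnitOfWork):
    def __enter__(self):
        self.batches = repository.DjangoRepository()
        transaction.set_autocommit(False)  # <1> stop autocommit, begin a transaction
        return super().__enter__()

    def __exit__(self, *args):
        super().__exit__(*args)
        transaction.set_autocommit(True)

    def commit(self):
        for batch in self.batches.seen:  # <3> push tracked objects back to the ORM
            self.batches.update(batch)
        transaction.commit()  # <2> explicit commit

    def rollback(self):
        transaction.rollback()  # <2> explicit rollback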
@@ -329,8 +367,10 @@ class DjangoUnitOfWork(AbstractUnitOfWork):
<1> `set_autocommit(False)` was the best way to tell Django to stop
automatically committing each ORM operation immediately, and to
begin a transaction.
+`set_autocommit(False)` 是告诉 Django 停止立即自动提交每次 ORM 操作并开始一个事务的最佳方式。
<2> Then we use the explicit rollback and commits.
+然后我们使用显式的回滚和提交操作。
<3> One difficulty: because, unlike with SQLAlchemy, we're not
instrumenting the domain model instances themselves, the
@@ -339,10 +379,13 @@ class DjangoUnitOfWork(AbstractUnitOfWork):
update them back to the ORM.
((("Django", "Unit of Work pattern with", startref="ix_DjangoUoW")))
((("Unit of Work pattern", "with Django", startref="ix_UoWDjango")))
+一个难点是:与 SQLAlchemy 不同,我们没有对领域模型实例本身进行跟踪(instrument),因此 `commit()` 命令需要显式地遍历每个仓储所接触过的所有对象,
+并手动将它们更新回 ORM。
=== API: Django Views Are Adapters
+API:Django 视图是适配器
((("adapters", "Django views")))
((("views", "Django views as adapters")))
@@ -352,9 +395,12 @@ The Django _views.py_ file ends up being almost identical to the
old _flask_app.py_, because our architecture means it's a very
thin wrapper around our service layer (which didn't change at all, by the way):
+Django 的 _views.py_ 文件最终与之前的 _flask_app.py_ 几乎完全相同,
+因为我们的架构使其成为服务层的一个非常薄的封装(顺便说一下,服务层完全没有改变):
+
[[django_views]]
-.Flask app -> Django views (src/djangoproject/alloc/views.py)
+.Flask app -> Django views (src/djangoproject/alloc/views.py)(Flask 应用程序 -> Django 视图)
====
[source,python]
----
@@ -394,11 +440,14 @@ def allocate(request):
=== Why Was This All So Hard?
+为什么这一切都如此困难?
((("Django", "using, difficulty of")))
OK, it works, but it does feel like more effort than Flask/SQLAlchemy. Why is
that?
+好的,它可以工作,但确实感觉比 Flask/SQLAlchemy 更费力。为什么会这样呢?
+
The main reason at a low level is because Django's ORM doesn't work in the same
way. We don't have an equivalent of the SQLAlchemy classical mapper, so our
`ActiveRecord` and our domain model can't be the same object. Instead we have to
@@ -406,36 +455,56 @@ build a manual translation layer behind the repository. That's more
work (although once it's done, the ongoing maintenance burden shouldn't be too
high).
+从底层来看,主要原因是 Django 的 ORM 工作方式不同。我们没有与 SQLAlchemy 的经典映射器等价的功能,
+因此我们的 `ActiveRecord` 和领域模型不能是同一个对象。相反,我们必须在仓储后面构建一个手动的转换层。这确实需要更多的工作(不过一旦完成,
+后续的维护负担应该不会太高)。
+
((("pytest", "pytest-django plugin")))
Because Django is so tightly coupled to the database, you have to use helpers
like `pytest-django` and think carefully about test databases, right from
the very first line of code, in a way that we didn't have to when we started
out with our pure domain model.
+因为 Django 与数据库的耦合非常紧密,所以你必须使用类似 `pytest-django` 这样的辅助工具,并从第一行代码开始就仔细考虑测试数据库的设置,
+这是我们在使用纯领域模型开始时所不需要处理的。
+
But at a higher level, the entire reason that Django is so great
is that it's designed around the sweet spot of making it easy to build CRUD
apps with minimal boilerplate. But the entire thrust of our book is about
what to do when your app is no longer a simple CRUD app.
+但从更高的层面来看,Django 之所以如此出色,正是因为它的设计切中了一个甜点区:用最少的样板代码轻松构建 CRUD 应用。
+但我们这本书的核心讨论的是,当你的应用不再是一个简单的 CRUD 应用时,该怎么办。
+
At that point, Django starts hindering more than it helps. Things like the
Django admin, which are so awesome when you start out, become actively dangerous
if the whole point of your app is to build a complex set of rules and modeling
around the workflow of state changes. The Django admin bypasses all of that.
+此时,Django 带来的阻碍开始大于它提供的帮助。像 Django Admin 这样的功能,在项目起步时非常出色,
+但如果你的应用的核心在于围绕状态变更的工作流构建一套复杂的规则和模型,那么它就会变得极其危险,因为 Django Admin 会绕过所有这些规则和逻辑。
+
=== What to Do If You Already Have Django
+如果你已经在使用 Django,该怎么办
((("Django", "applying patterns to Django app")))
So what should you do if you want to apply some of the patterns in this book
to a Django app? We'd say the following:
+那么,如果你想将本书中的一些模式应用到一个 Django 应用中,你应该怎么做呢?我们建议如下:
+
* The Repository and Unit of Work patterns are going to be quite a lot of work. The
main thing they will buy you in the short term is faster unit tests, so
evaluate whether that benefit feels worth it in your case. In the longer term, they
decouple your app from Django and the database, so if you anticipate wanting
to migrate away from either of those, Repository and UoW are a good idea.
+仓储模式和工作单元模式会带来相当多的工作量。从短期来看,它们主要为你带来的好处是更快的单元测试,因此你需要评估这种好处在你的场景中是否值得。
+从长期来看,它们会将你的应用程序与 Django 和数据库解耦,所以如果你预计将来可能想要脱离其中任何一个,
+使用仓储模式和工作单元模式是一个不错的选择。
* The Service Layer pattern might be of interest if you're seeing a lot of duplication in
your _views.py_. It can be a good way of thinking about your use cases separately from your web endpoints.
+如果你在 _views.py_ 文件中看到大量的代码重复,那么服务层模式可能会引起你的兴趣。它是一种将你的用例与 Web 端点分开思考的好方法。
* You can still theoretically do DDD and domain modeling with Django models,
tightly coupled as they are to the database; you may be slowed by
@@ -444,6 +513,9 @@ to a Django app? We'd say the following:
the _fat models_ approach: push as much logic down to your models as possible,
and apply patterns like Entity, Value Object, and Aggregate. However, see
the following caveat.
+理论上,即使 Django 模型与数据库紧密耦合,你仍然可以使用 DDD(领域驱动设计)和领域建模;虽然迁移过程可能会拖慢你的进度,但这不至于致命。
+所以只要你的应用程序不是太复杂,测试也不是太慢,你或许可以从 _胖模型_ 方法中获益:尽可能将逻辑下放到模型中,
+并应用如实体(Entity)、值对象(Value Object)和聚合(Aggregate)等模式。然而,请注意以下的注意事项。
With that said,
https://oreil.ly/Nbpjj[word
@@ -453,7 +525,12 @@ between apps. In those cases, there's a lot to be said for extracting out a
business logic or domain layer to sit between your views and forms and
your _models.py_, which you can then keep as minimal as possible.
+话虽如此,
+https://oreil.ly/Nbpjj[在 Django 社区的反馈] 表明,人们发现胖模型方法本身会遇到可扩展性问题,特别是在管理应用程序之间的相互依赖方面。
+在这些情况下,将业务逻辑或领域层提取出来,置于视图和表单与 _models.py_ 之间,有很多好处。而且,这也让你的 _models.py_ 可以尽量保持精简。
+
=== Steps Along the Way
+渐进式的步骤
((("Django", "applying patterns to Django app", "steps along the way")))
Suppose you're working on a Django project that you're not sure is going
@@ -461,27 +538,39 @@ to get complex enough to warrant the patterns we recommend, but you still
want to put a few steps in place to make your life easier, both in the medium
term and if you want to migrate to some of our patterns later. Consider the following:
+假设你正在开发一个 Django 项目,而你不确定该项目是否会变得足够复杂以至于需要使用我们推荐的模式,但你仍然希望采取一些步骤,
+使你的工作在中期更轻松一些,并且如果将来想迁移到我们的一些模式也会更方便。可以考虑以下建议:
+
* One piece of advice we've heard is to put a __logic.py__ into every Django app from day one. This gives you a place to put business logic, and to keep your
forms, views, and models free of business logic. It can become a stepping-stone
for moving to a fully decoupled domain model and/or service layer later.
+我们听过的一条建议是,从第一天开始就在每个 Django 应用中创建一个 __logic.py__ 文件。这为你提供了一个放置业务逻辑的地方,
+同时可以让你的表单、视图和模型中不包含业务逻辑。它可以成为将来迁移到完全解耦的领域模型和/或服务层的一个踏脚石。
* A business-logic layer might start out working with Django model objects and only later become fully decoupled from the framework and work on
plain Python data structures.
+业务逻辑层可能一开始是与 Django 模型对象一起工作的,而只有在之后才完全与框架解耦,转而使用纯粹的 _Python_ 数据结构。
[role="pagebreak-before"]
* For the read side, you can get some of the benefits of CQRS by putting reads
into one place, avoiding ORM calls sprinkled all over the place.
+在读取方面,你可以通过将读取操作集中到一个地方来获得一些 CQRS 的好处,避免 ORM 调用分散在各处。
* When separating out modules for reads and modules for domain logic, it
may be worth decoupling yourself from the Django apps hierarchy. Business
concerns will cut across them.
+当将读取模块和领域逻辑模块分离时,值得考虑让自己从 Django 的应用层次结构中解耦。业务需求通常会跨越这些应用模块。
NOTE: We'd like to give a shout-out to David Seddon and Ashia Zawaduk for
talking through some of the ideas in this appendix. They did their best to
stop us from saying anything really stupid about a topic we don't really
have enough personal experience of, but they may have failed.
+我们要向 David Seddon 和 Ashia Zawaduk 表示感谢,感谢他们与我们一起讨论了本附录中的一些想法。
+他们尽了最大的努力阻止我们在一个我们自己没有足够经验的话题上说出任何非常愚蠢的话,不过他们可能未能完全做到。
((("Django", startref="ix_Django")))
For more thoughts and actual lived experience dealing with existing
applications, refer to the <>.
+
+有关处理现有应用程序的更多想法和实际经验,请参阅 <>。
diff --git a/appendix_ds1_table.asciidoc b/appendix_ds1_table.asciidoc
index 0de6edbb..a2e8b6cc 100644
--- a/appendix_ds1_table.asciidoc
+++ b/appendix_ds1_table.asciidoc
@@ -1,59 +1,84 @@
[[appendix_ds1_table]]
[appendix]
== Summary Diagram and Table
+总结图表及表格
((("architecture, summary diagram and table", id="ix_archsumm")))
Here's what our architecture looks like by the end of the book:
+这是本书结尾时我们的架构图:
+
[[recap_diagram]]
image::images/apwp_aa01.png["diagram showing all components: flask+eventconsumer, service layer, adapters, domain etc"]
<> recaps each pattern and what it does.
+<> 总结了每种模式及其功能。
+
[[ds1_table]]
-.The components of our architecture and what they all do
+.The components of our architecture and what they all do(我们架构的各个组件及其功能)
[cols="1,1,2"]
|===
-| Layer | Component | Description
+| Layer(层级) | Component(组件) | Description(描述)
.5+a| *Domain*
+(*领域*)
__Defines the business logic.__
+(__定义业务逻辑。__)
-| Entity | A domain object whose attributes may change but that has a recognizable identity over time.
+| Entity(实体) | A domain object whose attributes may change but that has a recognizable identity over time.
+(一种领域对象,其属性可能会发生变化,但在一段时间内具有可识别的身份。)
-| Value object | An immutable domain object whose attributes entirely define it. It is fungible with other identical objects.
+| Value object(值对象) | An immutable domain object whose attributes entirely define it. It is fungible with other identical objects.
+(一个不可变的领域对象,其属性完全定义了自身。它可以与其他相同的对象互换。)
-| Aggregate | Cluster of associated objects that we treat as a unit for the purpose of data changes. Defines and enforces a consistency boundary.
+| Aggregate(聚合) | Cluster of associated objects that we treat as a unit for the purpose of data changes. Defines and enforces a consistency boundary.
+(关联对象的集合,为数据变更的目的将其视为一个整体。定义并强制执行一致性边界。)
-| Event | Represents something that happened.
+| Event(事件) | Represents something that happened.
+(表示已发生的某件事。)
-| Command | Represents a job the system should perform.
+| Command(命令) | Represents a job the system should perform.
+(表示系统应该执行的一项任务。)
-.3+a| *Service Layer*
+.3+a| *Service Layer*(*服务层*)
__Defines the jobs the system should perform and orchestrates different components.__
+(__定义系统应该执行的任务并协调不同的组件。__)
-| Handler | Receives a command or an event and performs what needs to happen.
-| Unit of work | Abstraction around data integrity. Each unit of work represents an atomic update. Makes repositories available. Tracks new events on retrieved aggregates.
-| Message bus (internal) | Handles commands and events by routing them to the appropriate handler.
+| Handler(处理器) | Receives a command or an event and performs what needs to happen.
+(接收命令或事件并执行需要完成的操作。)
+| Unit of work(工作单元) | Abstraction around data integrity. Each unit of work represents an atomic update. Makes repositories available. Tracks new events on retrieved aggregates.
+(围绕数据完整性的抽象。每个工作单元表示一次原子性更新。提供仓储支持。跟踪已检索聚合上的新事件。)
+| Message bus (internal)(消息总线(内部)) | Handles commands and events by routing them to the appropriate handler.
+(通过将命令和事件路由到适当的处理器进行处理。)
.2+a| *Adapters* (Secondary)
+(*适配器*(次级))
__Concrete implementations of an interface that goes from our system
to the outside world (I/O).__
+(__从我们的系统到外部世界(I/O)的接口的具体实现。__)
-| Repository | Abstraction around persistent storage. Each aggregate has its own repository.
-| Event publisher | Pushes events onto the external message bus.
+| Repository(仓储) | Abstraction around persistent storage. Each aggregate has its own repository.
+(围绕持久化存储的抽象。每个聚合都有其自己的仓储。)
+| Event publisher(事件发布器) | Pushes events onto the external message bus.
+(将事件推送到外部消息总线。)
.2+a| *Entrypoints* (Primary adapters)
+(*入口点*(主要适配器))
__Translate external inputs into calls into the service layer.__
+(__将外部输入转换为对服务层的调用。__)
| Web | Receives web requests and translates them into commands, passing them to the internal message bus.
-| Event consumer | Reads events from the external message bus and translates them into commands, passing them to the internal message bus.
+(接收 Web 请求并将其转换为命令,然后将其传递到内部消息总线。)
+| Event consumer(事件消费者) | Reads events from the external message bus and translates them into commands, passing them to the internal message bus.
+(从外部消息总线读取事件并将其转换为命令,然后传递到内部消息总线。)
-| N/A | External message bus (message broker) | A piece of infrastructure that different services use to intercommunicate, via events.
+| N/A | External message bus (message broker)(外部消息总线(消息代理)) | A piece of infrastructure that different services use to intercommunicate, via events.
+(一个基础设施,不同的服务通过事件使用它进行相互通信。)
|===
((("architecture, summary diagram and table", startref="ix_archsumm")))
diff --git a/appendix_project_structure.asciidoc b/appendix_project_structure.asciidoc
index df578be7..5a4c75cb 100644
--- a/appendix_project_structure.asciidoc
+++ b/appendix_project_structure.asciidoc
@@ -1,17 +1,22 @@
[[appendix_project_structure]]
[appendix]
== A Template Project Structure
+一个模板项目结构
((("projects", "template project structure", id="ix_prjstrct")))
Around <>, we moved from just having
everything in one folder to a more structured tree, and we thought it might
be of interest to outline the moving parts.
+大约在 <> 前后,我们从把所有内容都放在一个文件夹里,转向了更有结构的目录树,我们觉得值得概述一下其中的各个组成部分。
+
[TIP]
====
The code for this appendix is in the
appendix_project_structure branch https://oreil.ly/1rDRC[on GitHub]:
+本附录的代码位于 GitHub 上的 `appendix_project_structure` 分支 https://oreil.ly/1rDRC[见此处]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -22,6 +27,8 @@ git checkout appendix_project_structure
The basic folder structure looks like this:
+基本的文件夹结构如下所示:
+
[[project_tree]]
.Project tree
====
@@ -78,6 +85,10 @@ The basic folder structure looks like this:
different types of application code (e.g., Web API versus pub/sub client) usually
ends up being more trouble than it's worth; the cost in terms of complexity
and longer rebuild/CI times is too high. YMMV.]
+我们的 _docker-compose.yml_ 和 _Dockerfile_ 是运行我们应用程序的容器的主要配置文件,它们也可以用于运行测试(用于持续集成,CI)。
+一个更复杂的项目可能会有多个 Dockerfile,但我们发现,尽量减少镜像的数量通常是个好主意。脚注:分离生产与测试的镜像有时是个好主意,
+但我们倾向于认为,进一步尝试为不同类型的应用程序代码(例如,Web API 和发布/订阅客户端)分离不同镜像通常会得不偿失;
+这种做法在复杂性和较长的重建/CI 时间方面的成本太高。视情况而定(YMMV:Your Mileage May Vary)。
<2> A __Makefile__ provides the entrypoint for all the typical commands a developer
(or a CI server) might want to run during their normal workflow: `make
@@ -87,6 +98,10 @@ The basic folder structure looks like this:
`docker-compose` and `pytest` directly, but if nothing else, it's nice to
have all the "common commands" in a list somewhere, and unlike
documentation, a Makefile is code so it has less tendency to become out of date.
+一个 __Makefile__ 提供了所有典型命令的入口点,供开发者(或 CI 服务器)在日常工作流程中运行,比如 `make build`、`make test` 等等。
+脚注:一个纯 _Python_ 的替代方案是 http://www.pyinvoke.org[Invoke],如果你团队中的每个人都熟悉 _Python_(或至少比熟悉 Bash 更熟悉 _Python_),值得一试!
+这是可选的。你完全可以直接使用 `docker-compose` 和 `pytest`,但退一步说,把所有“常用命令”汇总在某个列表里也是很不错的。
+与文档不同,Makefile 是代码,因此不太容易过时。
<3> All the source code for our app, including the domain model, the
Flask app, and infrastructure code, lives in a Python package inside
@@ -95,38 +110,54 @@ The basic folder structure looks like this:
imports easy. Currently, the structure within this module is totally flat,
but for a more complex project, you'd expect to grow a folder hierarchy
that includes _domain_model/_, _infrastructure/_, _services/_, and _api/_.
+我们应用程序的所有源代码,包括领域模型、 _Flask_ 应用程序和基础设施代码,都放在 _src_ 文件夹内的一个 _Python_ 包中。脚注:
+关于 _src_ 文件夹的更多信息,请参考 Hynek Schlawack 的文章 https://hynek.me/articles/testing-packaging["Testing and Packaging"]。
+我们使用 `pip install -e` 和 _setup.py_ 文件来安装它,这使得导入变得简单。目前,这个模块内的结构是完全扁平的,但对于更复杂的项目,
+你可能需要发展出一个包含 _domain_model/_、_infrastructure/_、_services/_ 和 _api/_ 的文件夹层次结构。
<4> Tests live in their own folder. Subfolders distinguish different test
types and allow you to run them separately. We can keep shared fixtures
(_conftest.py_) in the main tests folder and nest more specific ones if we
wish. This is also the place to keep _pytest.ini_.
+测试代码存放在它们自己的文件夹中。子文件夹用于区分不同类型的测试,并允许单独运行它们。我们可以将共享的测试
+夹具(_conftest.py_)放在主测试文件夹中,如果需要,还可以嵌套更具体的测试夹具。同时,这也是存放 _pytest.ini_ 的地方。
TIP: The https://oreil.ly/QVb9Q[pytest docs] are really good on test layout and importability.
+https://oreil.ly/QVb9Q[pytest 文档] 在测试布局和可导入性方面非常出色。
Let's look at a few of these files and concepts in more detail.
+让我们更详细地看一下其中的一些文件和概念。
=== Env Vars, 12-Factor, and Config, Inside and Outside Containers
+环境变量、12-Factor 原则与配置:容器内外
The basic problem we're trying to solve here is that we need different
config settings for the following:
+我们在这里试图解决的基本问题是,对于以下情况,我们需要不同的配置设置:
- Running code or tests directly from your own dev machine, perhaps
talking to mapped ports from Docker containers
+直接从你自己的开发机器运行代码或测试,可能需要与从 Docker 容器映射的端口进行通信。
- Running on the containers themselves, with "real" ports and hostnames
+在容器本身上运行,使用“真实”的端口和主机名。
- Different container environments (dev, staging, prod, and so on)
+不同的容器环境(开发、测试、生产等)。
Configuration through environment variables as suggested by the
https://12factor.net/config[12-factor manifesto] will solve this problem,
but concretely, how do we implement it in our code and our containers?
+通过环境变量进行配置(正如 https://12factor.net/config[12-factor 宣言] 所建议的)可以解决这一问题,
+但具体来说,我们如何在代码和容器中实现它呢?
+
=== Config.py
@@ -134,8 +165,10 @@ Whenever our application code needs access to some config, it's going to
get it from a file called __config.py__. Here are a couple of examples from our
app:
+每当我们的应用程序代码需要访问某些配置时,它将从一个名为 __config.py__ 的文件中获取。以下是我们应用程序中的一些示例:
+
[[config_dot_py]]
-.Sample config functions (src/allocation/config.py)
+.Sample config functions (src/allocation/config.py)(示例配置函数)
====
[source,python]
----
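# Illustrative sketch, matching the two callouts below: functions rather than
# import-time constants, with local-dev defaults. The exact hosts, ports, and
# credentials here are placeholders, not the book's real values.
import os


def get_postgres_uri():  # <1>
    host = os.environ.get("DB_HOST", "localhost")  # <2>
    port = 54321 if host == "localhost" else 5432
    password = os.environ.get("DB_PASSWORD", "abc123")
    user, db_name = "allocation", "allocation"
    return f"postgresql://{user}:{password}@{host}:{port}/{db_name}"


def get_api_url():
    host = os.environ.get("API_HOST", "localhost")
    port = 5005 if host == "localhost" else 80
    return f"http://{host}:{port}"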
@@ -160,32 +193,43 @@ def get_api_url():
<1> We use functions for getting the current config, rather than constants
available at import time, because that allows client code to modify
`os.environ` if it needs to.
+我们使用函数来获取当前配置,而不是在导入时直接使用常量,因为这样可以让客户端代码在需要时修改 `os.environ`。
<2> _config.py_ also defines some default settings, designed to work when
running the code from the developer's local machine.footnote:[
This gives us a local development setup that "just works" (as much as possible).
You may prefer to fail hard on missing environment variables instead, particularly
if any of the defaults would be insecure in production.]
+_config.py_ 还定义了一些默认设置,这些设置旨在支持从开发者的本地机器运行代码时使用。脚注:
+这为我们提供了一个尽可能“开箱即用”的本地开发环境。但你可能更倾向于在缺失环境变量时直接失败,特别是如果任何默认值在生产环境中可能不够安全的话。
An elegant Python package called
https://github.com/hynek/environ-config[_environ-config_] is worth looking
at if you get tired of hand-rolling your own environment-based config functions.
+如果你厌倦了手动编写基于环境的配置函数,可以看看一个优雅的 _Python_ 包:https://github.com/hynek/environ-config[_environ-config_]。
+
TIP: Don't let this config module become a dumping ground that is full of things only vaguely related to config and that is then imported all over the place.
Keep things immutable and modify them only via environment variables.
If you decide to use a <>,
you can make it the only place (other than tests) that config is imported to.
+不要让这个配置模块变成一个四处堆满仅与配置稍有关系的东西的垃圾场,并且被到处导入。请保持配置的不可变性,仅通过环境变量对其进行修改。
+如果你决定使用一个 <>,可以让它成为唯一(除了测试之外)导入配置的地方。
=== Docker-Compose and Containers Config
+Docker-Compose 和容器配置
We use a lightweight Docker container orchestration tool called _docker-compose_.
It's main configuration is via a YAML file (sigh):footnote:[Harry is a bit YAML-weary.
It's _everywhere_, and yet he can never remember the syntax or how it's supposed
to indent.]
+我们使用了一种轻量级的 Docker 容器编排工具,称为 _docker-compose_。它的主要配置是通过一个 YAML 文件完成的(唉):脚注:
+Harry 对 YAML 有些厌倦了。它无处不在,但他总是记不住它的语法或正确的缩进方式。
+
[[docker_compose]]
-.docker-compose config file (docker-compose.yml)
+.docker-compose config file (docker-compose.yml)(docker-compose 配置文件)
====
[source,yaml]
----
@@ -224,29 +268,40 @@ services:
(containers) that we need for our app. Usually one main image
contains all our code, and we can use it to run our API, our tests,
or any other service that needs access to the domain model.
+在 _docker-compose_ 文件中,我们定义了应用程序所需的不同 _服务_(容器)。通常,一个主要镜像包含我们所有的代码,
+我们可以用它来运行 API、测试或任何其他需要访问领域模型的服务。
<2> You'll probably have other infrastructure services, including a database.
In production you might not use containers for this; you might have a cloud
provider instead, but _docker-compose_ gives us a way of producing a
similar service for dev or CI.
+你可能还会有其他基础设施服务,包括数据库。在生产环境中,你可能不会使用容器来运行这些服务,而是可能依赖云供应商,
+但 _docker-compose_ 为我们提供了一种方式,可以在开发或持续集成(CI)环境中生成类似的服务。
<3> The `environment` stanza lets you set the environment variables for your
containers, the hostnames and ports as seen from inside the Docker cluster.
If you have enough containers that information starts to be duplicated in
these sections, you can use `environment_file` instead. We usually call
ours _container.env_.
+`environment` 部分允许你为容器设置环境变量,以及从 Docker 集群内部看到的主机名和端口。如果你的容器多到
+这些信息开始在各个小节中重复出现,那么可以改用 `environment_file`。我们通常将其命名为 _container.env_。
<4> Inside a cluster, _docker-compose_ sets up networking such that containers are
available to each other via hostnames named after their service name.
+在集群内部,_docker-compose_ 设置了网络,使得容器可以通过以其服务名称命名的主机名彼此访问。
<5> Pro tip: if you're mounting volumes to share source folders between your
local dev machine and the container, the `PYTHONDONTWRITEBYTECODE` environment variable
tells Python to not write _.pyc_ files, and that will save you from
having millions of root-owned files sprinkled all over your local filesystem,
being all annoying to delete and causing weird Python compiler errors besides.
+专业提示:如果你正在挂载卷以在本地开发机器与容器之间共享源文件夹,可以设置 `PYTHONDONTWRITEBYTECODE` 环境变量,
+告诉 _Python_ 不要生成 _.pyc_ 文件。这将帮助你避免在本地文件系统中散布大量由 root 拥有的文件,这些文件不仅令人烦恼难以删除,
+还可能导致奇怪的 _Python_ 编译错误。
<6> Mounting our source and test code as `volumes` means we don't need to rebuild
our containers every time we make a code change.
+将我们的源代码和测试代码挂载为 `volumes` 意味着每次更改代码时,我们不需要重新构建容器。
<7> The `ports` section allows us to expose the ports from inside the containers
to the outside worldfootnote:[On a CI server, you may not be able to expose
@@ -254,19 +309,26 @@ services:
can find ways of making these port mappings optional (e.g., with
_docker-compose.override.yml_).]—these correspond to the default ports we set
in _config.py_.
+`ports` 部分允许我们将容器内部的端口暴露给外部世界。脚注:
+在 CI 服务器上,你可能无法可靠地暴露任意端口,但这仅是为了本地开发的便利。你可以找到方法使这些端口映射成为可选的
+(例如,使用 _docker-compose.override.yml_)。这些端口与我们在 _config.py_ 中设置的默认端口相对应。
NOTE: Inside Docker, other containers are available through hostnames named after
their service name. Outside Docker, they are available on `localhost`, at the
port defined in the `ports` section.
+在 Docker 内部,可以通过以服务名称命名的主机名访问其他容器。在 Docker 外部,可以通过 `localhost` 访问它们,端口由 `ports` 部分定义。
=== Installing Your Source as a Package
+将源代码安装为一个包
All our application code (everything except tests, really) lives inside an
_src_ folder:
+我们所有的应用程序代码(实际上除了测试以外的所有内容)都放在一个 _src_ 文件夹中:
+
[[src_folder_tree]]
-.The src folder
+.The src folder(src 文件夹)
====
[source,text]
[role="skip"]
@@ -280,11 +342,13 @@ _src_ folder:
====
<1> Subfolders define top-level module names. You can have multiple if you like.
+子文件夹定义了顶级模块名称。如果你需要,可以有多个。
<2> And _setup.py_ is the file you need to make it pip-installable, shown next.
+而 _setup.py_ 是让其支持通过 pip 安装所需的文件,如下所示。
[[setup_dot_py]]
-.pip-installable modules in three lines (src/setup.py)
+.pip-installable modules in three lines (src/setup.py)(用三行代码实现可通过 pip 安装的模块)
====
[source,python]
----
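# Illustrative sketch of the elided listing, following the text below:
# `packages=` names the subfolder(s) to install as top-level modules, and
# `name` is cosmetic but required. The package name here is an assumption.
from setuptools import setup

setup(
    name="allocation",
    version="0.1",
    packages=["allocation"],
)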
@@ -302,14 +366,20 @@ it's required. For a package that's never actually going to hit PyPI, it'll
do fine.footnote:[For more _setup.py_ tips, see
https://oreil.ly/KMWDz[this article on packaging] by Hynek.]
+这就是你所需的一切。`packages=` 指定你希望安装为顶级模块的子文件夹名称。`name` 条目只是一个装饰性选项,但它是必需的。
+对于一个永远不会真正发布到 PyPI 的包来说,这样已经足够了。脚注:
+有关更多 _setup.py_ 技巧,请参阅 Hynek 的这篇文章: https://oreil.ly/KMWDz[关于打包的文章]。
+
=== Dockerfile
Dockerfiles are going to be very project-specific, but here are a few key stages
you'll expect to see:
+Dockerfile 将会非常依赖具体项目,但以下是你可能会看到的一些关键阶段:
+
[[dockerfile]]
-.Our Dockerfile (Dockerfile)
+.Our Dockerfile (Dockerfile)(我们的 Dockerfile)
====
[source,dockerfile]
----
@@ -336,25 +406,34 @@ CMD flask run --host=0.0.0.0 --port=80
====
<1> Installing system-level dependencies
+安装系统级依赖项
<2> Installing our Python dependencies (you may want to split out your dev from
prod dependencies; we haven't here, for simplicity)
+安装我们的 _Python_ 依赖项(你可能希望将开发依赖和生产依赖分开;为了简单起见,我们在这里没有这样做)
<3> Copying and installing our source
+复制并安装我们的源代码
<4> Optionally configuring a default startup command (you'll probably override
this a lot from the command line)
+可选地配置一个默认的启动命令(你可能会经常从命令行覆盖它)。
TIP: One thing to note is that we install things in the order of how frequently they
are likely to change. This allows us to maximize Docker build cache reuse. I
can't tell you how much pain and frustration underlies this lesson. For this
and many more Python Dockerfile improvement tips, check out
https://pythonspeed.com/docker["Production-Ready Docker Packaging"].
+需要注意的一点是,我们按照更改频率的顺序安装内容。这样可以最大化 Docker 构建缓存的重用。我无法形容这个教训背后蕴含了多少痛苦和挫折。
+有关这一点以及更多关于改进 _Python_ Dockerfile 的技巧,请查看: https://pythonspeed.com/docker["生产就绪的 Docker 打包"]。
=== Tests
+测试
((("testing", "tests folder tree")))
Our tests are kept alongside everything else, as shown here:
+我们的测试代码与其他内容一起存放,如下所示:
+
[[tests_folder]]
-.Tests folder tree
+.Tests folder tree(测试文件夹结构树)
====
[source,text]
[role="tree"]
@@ -378,20 +457,34 @@ Nothing particularly clever here, just some separation of different test types
that you're likely to want to run separately, and some files for common fixtures,
config, and so on.
+这里并没有什么特别的巧妙之处,只是对可能需要单独运行的不同类型测试进行了分类,并提供了一些文件用于共享的夹具、配置等。
+
There's no _src_ folder or _setup.py_ in the test folders because we usually
haven't needed to make tests pip-installable, but if you have difficulties with
import paths, you might find it helps.
+测试文件夹中没有 _src_ 文件夹或 _setup.py_,因为我们通常不需要让测试代码支持通过 pip 安装。
+但如果你在导入路径方面遇到困难,这可能会有所帮助。
+
=== Wrap-Up
+总结
These are our basic building blocks:
+以下是我们的基本构建块:
+
* Source code in an _src_ folder, pip-installable using _setup.py_
+源代码存放在 _src_ 文件夹中,可通过 _setup.py_ 进行 pip 安装。
* Some Docker config for spinning up a local cluster that mirrors production as far as possible
+一些 Docker 配置,用于启动尽可能接近生产环境的本地集群。
* Configuration via environment variables, centralized in a Python file called _config.py_, with defaults allowing things to run _outside_ containers
+通过环境变量进行配置,集中在一个名为 _config.py_ 的 Python 文件中,并带有默认值,允许在容器 _外部_ 运行代码。
* A Makefile for useful command-line, um, commands
+一个用于便捷命令行操作的 Makefile
((("projects", "template project structure", startref="ix_prjstrct")))
We doubt that anyone will end up with _exactly_ the same solutions we did, but we hope you
find some inspiration here.
+
+我们怀疑是否会有人最终采用与我们 _完全_ 相同的解决方案,但我们希望你能从中获得一些灵感。
diff --git a/appendix_validation.asciidoc b/appendix_validation.asciidoc
index 6fd2eb4c..a2f8a155 100644
--- a/appendix_validation.asciidoc
+++ b/appendix_validation.asciidoc
@@ -1,31 +1,46 @@
[[appendix_validation]]
[appendix]
== Validation
+校验
((("validation", id="ix_valid")))
Whenever we're teaching and talking about these techniques, one question that
comes up over and over is "Where should I do validation? Does that belong with
my business logic in the domain model, or is that an infrastructural concern?"
+每当我们教授和讨论这些技术时,一个反复出现的问题是:“我应该在哪里进行校验?这是属于领域模型中的业务逻辑,还是属于基础设施相关的问题?”
+
As with any architectural question, the answer is: it depends!
+和其他任何架构问题一样,答案是:视情况而定!
+
The most important consideration is that we want to keep our code well separated
so that each part of the system is simple. We don't want to clutter our code
with irrelevant detail.
+最重要的考量是我们希望代码保持良好的分离,使系统的每个部分都简洁明了。我们不希望代码中充满无关的细节。
+
=== What Is Validation, Anyway?
+到底什么是校验?
When people use the word _validation_, they usually mean a process whereby they
test the inputs of an operation to make sure that they match certain criteria.
Inputs that match the criteria are considered _valid_, and inputs that don't
are _invalid_.
+当人们使用“_校验_”这个词时,通常指的是一种过程,通过该过程测试操作的输入内容,以确保它们符合某些标准。
+符合标准的输入被视为 _有效_,而不符合的则为 _无效_。
+
If the input is invalid, the operation can't continue but should exit with
some kind of error. In other words, validation is about creating _preconditions_. We find it useful
to separate our preconditions into three subtypes: syntax, semantics, and
pragmatics.
+如果输入是无效的,则操作无法继续,应以某种错误退出。换句话说,校验是关于创建 _前置条件_ 的。
+我们发现将前置条件划分为三种子类型很有用:语法(syntax)、语义(semantics)和语用(pragmatics)。
+
=== Validating Syntax
+语法校验
In linguistics, the _syntax_ of a language is the set of rules that govern the
structure of grammatical sentences. For example, in English, the sentence
@@ -33,26 +48,41 @@ structure of grammatical sentences. For example, in English, the sentence
sound, while the phrase "hat hat hat hat hat hat wibble" is not. We can describe
grammatically correct sentences as _well formed_.
+在语言学中,语言的 _语法(syntax)_ 是指控制语法句子结构的一组规则。例如,在英语中,句子“将三件`TASTELESS-LAMP`分配到订单27”在语法上是正确的,
+而短语“hat hat hat hat hat hat wibble”则不是。我们可以将语法正确的句子描述为 _结构良好(well formed)_ 的。
+
[role="pagebreak-before"]
How does this map to our application? Here are some examples of syntactic rules:
+这怎么映射到我们的应用程序呢?以下是一些语法规则的示例:
+
* An `Allocate` command must have an order ID, a SKU, and a quantity.
+一个 `Allocate` 命令必须包含订单ID、SKU和数量。
* A quantity is a positive integer.
+数量必须是一个正整数。
* A SKU is a string.
+SKU 必须是一个字符串。
These are rules about the shape and structure of incoming data. An `Allocate`
command without a SKU or an order ID isn't a valid message. It's the equivalent
of the phrase "Allocate three to."
+这些是关于传入数据形状和结构的规则。一个缺少 SKU 或订单 ID 的 `Allocate` 命令不是一个有效的消息。
+这相当于短语“Allocate three to.”
+
We tend to validate these rules at the edge of the system. Our rule of thumb is
that a message handler should always receive only a message that is well-formed
and contains all required information.
+我们倾向于在系统边界进行规则校验。我们的经验法则是,消息处理程序应该只接收格式规范且包含所有必需信息的消息。
+
One option is to put your validation logic on the message type itself:
+一种选择是将校验逻辑放在消息类型本身上:
+
[[validation_on_message]]
-.Validation on the message class (src/allocation/commands.py)
+.Validation on the message class (src/allocation/commands.py)(消息类上的校验)
====
[source,python]
----
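# Illustrative sketch only: the real listing is elided from this diff. Field
# names and the exact schema rules are assumptions; the shape follows the two
# callouts below.
import json
from dataclasses import dataclass

from schema import And, Schema, Use


class Command:
    pass


@dataclass
class Allocate(Command):
    _schema = Schema({  # <1> declarative structure + validation in one place
        "orderid": str,
        "sku": str,
        "qty": And(Use(int), lambda n: n > 0),
    }, ignore_extra_keys=True)

    orderid: str
    sku: str
    qty: int

    @classmethod
    def from_json(cls, data):  # <2> JSON string in, message type out
        data = json.loads(data)
        return cls(**cls._schema.validate(data))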
@@ -83,9 +113,11 @@ class Allocate(Command):
<1> The https://pypi.org/project/schema[++schema++ library] lets us
describe the structure and validation of our messages in a nice declarative way.
+https://pypi.org/project/schema[++schema++库] 让我们能够以一种不错的声明式方式描述消息的结构和校验。
<2> The `from_json` method reads a string as JSON and turns it into our message
type.
+`from_json` 方法将字符串作为 JSON 读取,并将其转换为我们的消息类型。
// IDEA hynek didn't like the inline call to json.loads
@@ -93,9 +125,11 @@ This can get repetitive, though, since we need to specify our fields twice,
so we might want to introduce a helper library that can unify the validation and
declaration of our message types:
+不过,这可能会变得重复,因为我们需要两次指定字段,因此我们可能想引入一个辅助库来统一消息类型的校验和声明:
+
[[command_factory]]
-.A command factory with schema (src/allocation/commands.py)
+.A command factory with schema (src/allocation/commands.py)(带有模式的命令工厂)
====
[source,python]
----
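# Illustrative sketch only, reconstructed from the callouts below; names are
# assumptions wherever the diff elides them.
import json
from dataclasses import make_dataclass

from schema import And, Schema, Use


def command(name, **fields):  # <1> kwarg name = field name, value = parser
    schema = Schema(And(Use(json.loads), fields), ignore_extra_keys=True)
    cls = make_dataclass(name, fields.keys())  # <2> dynamic message type
    cls.from_json = lambda s: cls(**schema.validate(s))  # <3> patched on
    return cls


def greater_than_zero(x):
    return x > 0


quantity = And(Use(int), greater_than_zero)  # <4> reusable parser

AddStock = command("AddStock", sku=str, qty=quantity)  # <5> one-liner declaration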
@@ -127,15 +161,22 @@ AddStock = command(
<1> The `command` function takes a message name, plus kwargs for the fields of
the message payload, where the name of the kwarg is the name of the field and
the value is the parser.
+`command` 函数接受一个消息名称以及消息负载字段的关键字参数 (kwargs),其中关键字参数的名称是字段名称,值是解析器。
<2> We use the `make_dataclass` function from the dataclass module to dynamically
create our message type.
+我们使用 `dataclass` 模块中的 `make_dataclass` 函数来动态创建消息类型。
<3> We patch the `from_json` method onto our dynamic dataclass.
+我们将 `from_json` 方法附加到动态数据类上。
<4> We can create reusable parsers for quantity, SKU, and so on to keep things DRY.
+我们可以为数量、SKU 等创建可重用的解析器,以保持代码的简洁和复用性(DRY原则)。
<5> Declaring a message type becomes a one-liner.
+声明一种消息类型就变成了一行代码。
This comes at the expense of losing the types on your dataclass, so bear that
trade-off in mind.
+这样做的代价是会丢失数据类上的类型,因此请记住这种权衡。
+
// (EJ2) I understand this code, but find it to be a little bit gross, since
// there are many alternatives that combine schema validation, object serialization
// + deserialization, and class type definitions for you. Examples here: https://github.com/voidfiles/python-serialization-benchmark
@@ -144,6 +185,7 @@ trade-off in mind.
=== Postel's Law and the Tolerant Reader Pattern
+伯斯塔尔法则与宽容读取者模式
_Postel's law_, or the _robustness principle_, tells us, "Be liberal in what you
accept, and conservative in what you emit." We think this applies particularly
@@ -151,28 +193,45 @@ well in the context of integration with our other systems. The idea here is
that we should be strict whenever we're sending messages to other systems, but
as lenient as possible when we're receiving messages from others.
+_伯斯塔尔法则_,又称 _稳健性原则_,告诉我们:“在接收时尽可能宽松,在输出时尽可能保守。”我们认为这一原则在与其他系统集成的上下文中特别适用。
+这一思想是指,在向其他系统发送消息时,我们应该尽可能严格,而在接收其他系统的消息时,则尽可能宽容。
+
For example, our system _could_ validate the format of a SKU. We've been using
made-up SKUs like `UNFORGIVING-CUSHION` and `MISBEGOTTEN-POUFFE`. These follow
a simple pattern: two words, separated by dashes, where the second word is the
type of product and the first word is an adjective.
+例如,我们的系统 _可以_ 校验 SKU 的格式。我们一直在使用虚构的 SKU,比如 `UNFORGIVING-CUSHION` 和 `MISBEGOTTEN-POUFFE`。
+这些遵循一个简单的模式:由两个单词组成,单词之间用连字符分隔,其中第二个单词是产品类型,第一个单词是形容词。
+
Developers _love_ to validate this kind of thing in their messages, and reject
anything that looks like an invalid SKU. This causes horrible problems down the
line when some anarchist releases a product named `COMFY-CHAISE-LONGUE` or when
a snafu at the supplier results in a shipment of `CHEAP-CARPET-2`.
+开发人员 _非常热衷_ 于在消息中校验这样的内容,并拒绝任何看起来像无效 SKU 的数据。然而,这会在后续引发可怕的问题,
+比如某个特立独行的人发布了一款名为 `COMFY-CHAISE-LONGUE` 的产品,或者供应商的一次失误导致一批货物使用了 `CHEAP-CARPET-2` 这样的 SKU。
+
Really, as the allocation system, it's _none of our business_ what the format of
a SKU might be. All we need is an identifier, so we can simply describe it as a
string. This means that the procurement system can change the format whenever
they like, and we won't care.
+实际上,作为分配系统,SKU 的格式究竟是什么根本 _不关我们的事_。我们所需要的只是一个标识符,因此我们可以简单地将其描述为一个字符串。
+这意味着采购系统可以随时更改格式,而我们完全不用关心。
+
This same principle applies to order numbers, customer phone numbers, and much
more. For the most part, we can ignore the internal structure of strings.
+这一原则同样适用于订单号、客户电话号码等等。大多数情况下,我们可以忽略字符串的内部结构。
+
Similarly, developers _love_ to validate incoming messages with tools like JSON
Schema, or to build libraries that validate incoming messages and share them
among systems. This likewise fails the robustness test.
+同样地,开发人员 _非常热衷_ 使用诸如 JSON Schema 之类的工具校验传入消息,或构建用于校验传入消息的库并在系统之间共享。
+然而,这同样无法通过稳健性测试。
+
// (EJ3) This reads like it's saying that JSON-Schema is bad (which is a separate discussion, I think.)
//
// If I understand correctly, the issue is that JSON-Schema allows you to specify
@@ -184,38 +243,55 @@ Let's imagine, for example, that the procurement system adds new fields to the
`ChangeBatchQuantity` message that record the reason for the change and the
email of the user responsible for the change.
+举个例子,假设采购系统在 `ChangeBatchQuantity` 消息中新增了字段,用于记录更改的原因以及负责更改的用户的电子邮件地址。
+
Since these fields don't matter to the allocation service, we should simply
ignore them. We can do that in the `schema` library by passing the keyword arg
`ignore_extra_keys=True`.
+由于这些字段与分配服务无关,我们应该直接忽略它们。我们可以在 `schema` 库中通过传递关键字参数 `ignore_extra_keys=True` 来实现这一点。
+
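For illustration, here is a minimal sketch (an editor's aside; the field names for `ChangeBatchQuantity` are assumptions) of how `ignore_extra_keys=True` lets the extra fields pass through harmlessly:

[source,python]
----
from schema import Schema, Use

change_batch_quantity = Schema(
    {"ref": str, "qty": Use(int)},
    ignore_extra_keys=True,  # extra fields such as reason/email are dropped
)

msg = {"ref": "batch-001", "qty": "20", "reason": "damages", "email": "bob@example.com"}
assert change_batch_quantity.validate(msg) == {"ref": "batch-001", "qty": 20}
----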
This pattern, whereby we extract only the fields we care about and do minimal
validation of them, is the Tolerant Reader pattern.
+这种模式,即我们只提取关心的字段并对其进行最少的校验,称为宽容读取者模式(Tolerant Reader Pattern)。
+
TIP: Validate as little as possible. Read only the fields you need, and don't
overspecify their contents. This will help your system stay robust when other
systems change over time. Resist the temptation to share message
definitions between systems: instead, make it easy to define the data you
depend on. For more info, see Martin Fowler's article on the
https://oreil.ly/YL_La[Tolerant Reader pattern].
+尽可能少地进行校验。只读取你需要的字段,不要过度指定它们的内容。当其他系统随着时间发生变化时,这将有助于保持你的系统稳健。
+抗拒在系统之间共享消息定义的诱惑:相反,要使定义你所依赖的数据变得容易。有关更多信息,
+请参阅 Martin Fowler 关于 https://oreil.ly/YL_La[宽容读取者模式] 的文章。
[role="pagebreak-before less_space"]
-.Is Postel Always Right?
+.Is Postel Always Right?(伯斯塔尔(Postel)总是对的吗?)
*******************************************************************************
Mentioning Postel can be quite triggering to some people. They will
https://oreil.ly/bzLmb[tell you]
that Postel is the precise reason that everything on the internet is broken and
we can't have nice things. Ask Hynek about SSLv3 one day.
+提到伯斯塔尔(Postel)可能对某些人来说是一个相当敏感的话题。他们会 https://oreil.ly/bzLmb[告诉你],Postel 恰恰是导致互联网上一切问题的原因,
+也是我们无法拥有美好事物的根源。哪天可以问问 Hynek 关于 SSLv3 的事情。
+
We like the Tolerant Reader approach in the particular context of event-based
integration between services that we control, because it allows for independent
evolution of those services.
+我们喜欢在我们所控制的服务之间进行基于事件的集成时采用宽容读取者(Tolerant Reader)的方法,因为它允许这些服务独立演化。
+
If you're in charge of an API that's open to the public on the big bad
internet, there might be good reasons to be more conservative about what
inputs you allow.
+
+如果你负责管理一个在充满挑战的互联网环境中向公众公开的 API,那么可能有充分的理由更保守地限制你允许的输入。
*******************************************************************************
=== Validating at the Edge
+在边界处进行校验
// (EJ2) IMO "Smart Edges, Dumb Pipes" is a useful another useful idiom to keep
// validation straight.
@@ -229,18 +305,28 @@ domain model or use-case handlers see them. This helps our code stay clean
and maintainable over the long term. We sometimes refer to this as _validating
at the edge of the system_.
+早些时候,我们提到要避免在代码中掺杂无关的细节。特别是,我们不想在领域模型内部进行防御性编程。相反,
+我们希望确保在领域模型或用例处理程序看到请求之前,这些请求就已经被确认是有效的。这有助于我们的代码在长期内保持整洁和可维护性。
+我们有时称之为 _在系统边界进行校验_ 。
+
In addition to keeping your code clean and free of endless checks and asserts,
bear in mind that invalid data wandering through your system is a time bomb;
the deeper it gets, the more damage it can do, and the fewer tools
you have to respond to it.
+除了让你的代码保持干净并避免无穷无尽的检查和断言之外,请牢记,无效数据在系统中游走就像一颗定时炸弹;它深入得越深,可能造成的破坏就越大,
+而你能够用来应对它的工具就越少。
+
Back in <>, we said that the message bus was a great place to put
cross-cutting concerns, and validation is a perfect example of that. Here's how
we might change our bus to perform validation for us:
+回到<>,我们说过消息总线是放置横切关注点(cross-cutting concerns)的绝佳位置,而校验正是一个完美的例子。
+以下是我们如何修改消息总线来为我们执行校验的方式:
+
[[validation_on_bus]]
-.Validation
+.Validation(校验)
====
[source,python]
----
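# Illustrative sketch only: the real listing is elided from this diff. The
# MESSAGE_TYPES registry and the ValidationError alias are assumptions, based
# on the surrounding text (the Flask endpoint below maps ValidationError to a
# bad-request response).
import logging

from schema import SchemaError as ValidationError


class MessageBus:
    def handle_message(self, name: str, body: str):
        try:
            message_type = MESSAGE_TYPES[name]  # assumed: name -> command class
        except KeyError:
            raise KeyError(f"Unknown message name {name}")
        try:
            message = message_type.from_json(body)  # validation at the edge
        except ValidationError:
            logging.error("invalid message of type %s: %s", name, body)
            raise
        return self.handle([message])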
@@ -269,9 +355,11 @@ class MessageBus:
Here's how we might use that method from our Flask API endpoint:
+以下是我们可能在 Flask API 端点中使用该方法的方式:
+
[[validation_bubbles_up]]
-.API bubbles up validation errors (src/allocation/flask_app.py)
+.API bubbles up validation errors (src/allocation/flask_app.py)(API 将校验错误向上冒泡)
====
[source,python]
----
@@ -291,8 +379,10 @@ def bad_request(e: ValidationError):
And here's how we might plug it in to our asynchronous message processor:
+以下是我们可能将其集成到异步消息处理器中的方式:
+
[[validation_pubsub]]
-.Validation errors when handling Redis messages (src/allocation/redis_pubsub.py)
+.Validation errors when handling Redis messages (src/allocation/redis_pubsub.py)(处理 Redis 消息时的校验错误)
====
[source,python]
----
@@ -311,27 +401,38 @@ the outside world and how to report success or failure. Our message bus takes
care of validating our requests and routing them to the correct handler, and
our handlers are exclusively focused on the logic of our use case.
+请注意,我们的入口点只关注如何从外界获取消息以及如何报告成功或失败。我们的消息总线负责校验请求并将其路由到正确的处理程序,
+而我们的处理程序则专注于用例逻辑本身。
+
TIP: When you receive an invalid message, there's usually little you can do but
log the error and continue. At MADE we use metrics to count the number of
messages a system receives, and how many of those are successfully
processed, skipped, or invalid. Our monitoring tools will alert us if we
see spikes in the numbers of bad messages.
+当你收到无效消息时,通常除了记录错误并继续运行外,你几乎无能为力。在 MADE,我们使用指标来统计系统接收到的消息数量,
+以及其中成功处理、被跳过或无效的消息数量。如果我们发现无效消息数量激增,我们的监控工具会向我们发出警报。
=== Validating Semantics
+语义校验
While syntax is concerned with the structure of messages, _semantics_ is the study
of _meaning_ in messages. The sentence "Undo no dogs from ellipsis four" is
syntactically valid and has the same structure as the sentence "Allocate one
teapot to order five," but it is meaningless.
+语法关注的是消息的结构,而 _语义_ 则研究消息的 _含义_。句子“Undo no dogs from ellipsis four”(撤销不从省略号四中取走狗)在语法上是有效的,
+并且它与句子“Allocate one teapot to order five”(为订单五分配一个茶壶)的结构相同,但它却毫无意义。
+
We can read this JSON blob as an `Allocate` command but can't successfully
execute it, because it's _nonsense_:
+我们可以将这个 JSON 数据块解读为一个 `Allocate` 命令,但无法成功执行它,因为它是 _无意义的_:
+
[[invalid_order]]
-.A meaningless message
+.A meaningless message(一个无意义的消息)
====
[source,python]
----
@@ -346,9 +447,11 @@ execute it, because it's _nonsense_:
We tend to validate semantic concerns at the message-handler layer with a kind
of contract-based programming:
+我们倾向于在消息处理程序层使用一种基于契约的编程方式来校验语义相关的问题:
+
[[ensure_dot_py]]
-.Preconditions (src/allocation/ensure.py)
+.Preconditions (src/allocation/ensure.py)(前置条件)
====
[source,python]
----
@@ -379,17 +482,22 @@ def product_exists(event, uow): #<3>
====
<1> We use a common base class for errors that mean a message is invalid.
+我们使用一个通用的错误基类来表示消息无效。
<2> Using a specific error type for this problem makes it easier to report on
and handle the error. For example, it's easy to map `ProductNotFound` to a 404
in Flask.
+为这个问题使用特定的错误类型使得报告和处理该错误更加容易。例如,在 Flask 中将 `ProductNotFound` 映射为 404 是很简单的。
<3> `product_exists` is a precondition. If the condition is `False`, we raise an
error.
+`product_exists` 是一个前置条件。如果条件为 `False`,我们就会抛出一个错误。
This keeps the main flow of our logic in the service layer clean and declarative:
+这使得服务层中的主要逻辑流程保持干净,并具有声明式风格:
+
[[ensure_in_services]]
-.Ensure calls in services (src/allocation/services.py)
+.Ensure calls in services (src/allocation/services.py)(服务层中的 ensure 调用)
====
[source,python,highlight=8]
----
@@ -413,11 +521,15 @@ We can extend this technique to make sure that we apply messages idempotently.
For example, we want to make sure that we don't insert a batch of stock more
than once.
+我们可以扩展此技术,以确保消息以幂等的方式被应用。例如,我们希望确保不会多次插入同一批库存。
+
If we get asked to create a batch that already exists, we'll log a warning and
continue to the next message:
+如果我们被要求创建一个已存在的批次,我们会记录一条警告并继续处理下一个消息:
+
[[skipmessage]]
-.Raise SkipMessage exception for ignorable events (src/allocation/services.py)
+.Raise SkipMessage exception for ignorable events (src/allocation/services.py)(为可忽略事件引发 SkipMessage 异常)
====
[source,python]
----
@@ -441,8 +553,10 @@ def batch_is_new(self, event, uow):
Introducing a `SkipMessage` exception lets us handle these cases in a generic
way in our message bus:
+引入一个 `SkipMessage` 异常使我们可以在消息总线中以通用的方式处理这些情况:
+
[[skip_in_bus]]
-.The bus now knows how to skip (src/allocation/messagebus.py)
+.The bus now knows how to skip (src/allocation/messagebus.py)(消息总线现在知道如何跳过)
====
[source,python]
----
@@ -461,11 +575,17 @@ There are a couple of pitfalls to be aware of here. First, we need to be sure
that we're using the same UoW that we use for the main logic of our
use case. Otherwise, we open ourselves to irritating concurrency bugs.
+在这里需要注意一些陷阱。首先,我们需要确保使用与用例主要逻辑相同的工作单元。否则,我们可能会遇到恼人的并发错误。
+
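+As a sketch, the difference looks like this (assuming the
+`unit_of_work.SqlAlchemyUnitOfWork` and `uow.products` APIs from earlier
+chapters; `ProductNotFound` is the error type shown above):
+
+[source,python]
+----
+# Risky: building a second UoW inside the check runs it in a separate
+# transaction, so the check and the use case can see different state.
+def product_exists_risky(event):
+    with unit_of_work.SqlAlchemyUnitOfWork() as uow:
+        if uow.products.get(event.sku) is None:
+            raise ProductNotFound(event)
+
+
+# Safer: the precondition receives the handler's own UoW, so the check
+# and the main logic of the use case share a single transaction.
+def product_exists(event, uow):
+    if uow.products.get(event.sku) is None:
+        raise ProductNotFound(event)
+----
+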
Second, we should try to avoid putting _all_ our business logic into these
precondition checks. As a rule of thumb, if a rule _can_ be tested inside our
domain model, then it _should_ be tested in the domain model.
+其次,我们应尽量避免将 _所有_ 业务逻辑都放入这些前置条件检查中。一个经验法则是,如果某条规则 _可以_ 在领域模型中被测试,
+那么它 _应该_ 在领域模型中进行测试。
+
=== Validating Pragmatics
+语用校验
_Pragmatics_ is the study of how we understand language in context. After we have
parsed a message and grasped its meaning, we still need to process it in
@@ -474,36 +594,47 @@ this is very brave," it may mean that the reviewer admires your courage—unless
they're British, in which case, they're trying to tell you that what you're doing
is insanely risky, and only a fool would attempt it. Context is everything.
+_语用学_ 研究的是我们如何在上下文中理解语言。在解析消息并理解其含义后,我们仍需要在上下文中处理它。例如,
+如果你在一个拉取请求中收到评论说:“我认为这非常勇敢,”可能意味着评论者钦佩你的勇气——除非他们是英国人,那样的话,
+他们其实是在告诉你你正在做的事情极具风险,只有傻瓜才会尝试。上下文是一切的关键。
+
[role="nobreakinside less_space"]
-.Validation Recap
+.Validation Recap(校验回顾)
*****************************************************************
-Validation means different things to different people::
+Validation means different things to different people(校验对不同的人来说意味着不同的事情)::
When talking about validation, make sure you're clear about what you're
validating.
We find it useful to think about syntax, semantics, and pragmatics: the
structure of messages, the meaningfulness of messages, and the business
logic governing our response to messages.
+当谈到校验时,请确保你明确知道要校验的内容。
+我们发现将校验分为语法、语义和语用这三个方面是很有帮助的:消息的结构、消息的意义以及控制我们对消息响应的业务逻辑。
-Validate at the edge when possible::
+Validate at the edge when possible(尽可能在边界处进行校验)::
Validating required fields and the permissible ranges of numbers is _boring_,
and we want to keep it out of our nice clean codebase. Handlers should always
receive only valid messages.
+校验必填字段和数字的允许范围是 _枯燥的_,我们希望将这些内容排除在优雅干净的代码库之外。处理程序应始终只接收有效的消息。
-Only validate what you require::
+Only validate what you require(只校验你所需要的内容)::
Use the Tolerant Reader pattern: read only the fields your application needs
and don't overspecify their internal structure. Treating fields as opaque
strings buys you a lot of flexibility.
+使用宽容读取器(Tolerant Reader)模式:只读取你的应用程序需要的字段,不要对它们的内部结构做过多规范化。
+将字段视为不透明的字符串可以为你带来很大的灵活性。
-Spend time writing helpers for validation::
+Spend time writing helpers for validation(花时间编写校验辅助函数)::
Having a nice declarative way to validate incoming messages and apply
preconditions to your handlers will make your codebase much cleaner.
It's worth investing time to make boring code easy to maintain.
+采用一种优雅的声明式方式来校验传入消息并为处理程序应用前置条件,将使你的代码库更加干净。花时间让枯燥的代码易于维护是值得的。
-Locate each of the three types of validation in the right place::
+Locate each of the three types of validation in the right place(在合适的位置放置这三种类型的校验)::
Validating syntax can happen on message classes, validating
semantics can happen in the service layer or on the message bus,
and validating pragmatics belongs in the domain model.
+语法校验可以在消息类上进行,语义校验可以在服务层或消息总线上进行,而语用校验则属于领域模型。
*****************************************************************
@@ -512,6 +643,7 @@ TIP: Once you've validated the syntax and semantics of your commands
at the edges of your system, the domain is the place for the rest
of your validation. Validation of pragmatics is often a core part
of your business rules.
+一旦你在系统边界校验了命令的语法和语义,其余的校验就属于领域模型了。语用校验通常是你的业务规则的核心部分。
In software terms, the pragmatics of an operation are usually managed by the
@@ -520,3 +652,6 @@ domain model. When we receive a message like "allocate three million units of
_semantically_ valid, but we're unable to comply because we don't have the stock
available.
((("validation", startref="ix_valid")))
+
+在软件领域中,一个操作的语用性通常由领域模型来管理。当我们接收到类似“为订单76543分配三百万单位的`SCARCE-CLOCK`”这样的消息时,
+该消息在 _语法上_ 是有效的,_语义上_ 也是有效的,但我们无法执行,因为我们没有足够的库存。
diff --git a/chapter_01_domain_model.asciidoc b/chapter_01_domain_model.asciidoc
index 194a8526..d7c0ee2b 100644
--- a/chapter_01_domain_model.asciidoc
+++ b/chapter_01_domain_model.asciidoc
@@ -1,5 +1,6 @@
[[chapter_01_domain_model]]
== Domain Modeling
+领域建模
((("domain modeling", id="ix_dommod")))
((("domain driven design (DDD)", seealso="domain model; domain modeling")))
@@ -8,17 +9,25 @@ that's highly compatible with TDD. We'll discuss _why_ domain modeling
matters, and we'll look at a few key patterns for modeling domains: Entity,
Value Object, and Domain Service.
+本章将探讨如何通过代码对业务流程进行建模,并使其与TDD高度兼容。
+我们将讨论领域建模的重要性(_why_),并研究一些领域建模的关键模式:实体(Entity)、值对象(Value Object)和领域服务(Domain Service)。
+
<> is a simple visual placeholder for our Domain
Model pattern. We'll fill in some details in this chapter, and as we move on to
other chapters, we'll build things around the domain model, but you should
always be able to find these little shapes at the core.
+<> 是我们领域模型模式的一个简单视觉占位符。
+在本章中我们会填充一些细节,随着进入其他章节,我们会围绕领域模型构建内容,
+但你始终应该能够在核心找到这些小形状。
+
[[maps_chapter_01_notext]]
-.A placeholder illustration of our domain model
+.A placeholder illustration of our domain model(我们领域模型的占位符示意图)
image::images/apwp_0101.png[]
[role="pagebreak-before less_space"]
=== What Is a Domain Model?
+什么是领域模型?
((("business logic layer")))
In the <>, we used the term _business logic layer_
@@ -27,6 +36,10 @@ the book, we're going to use the term _domain model_ instead. This is a term
from the DDD community that does a better job of capturing our intended meaning
(see the next sidebar for more on DDD).
+在 <> 中,我们使用了术语 _business logic layer_(业务逻辑层)来描述三层架构的中心层。
+在本书的其余部分,我们将改用术语 _domain model_(领域模型)。这是DDD(领域驱动设计)社区的一个术语,
+它更能准确表达我们的意图(有关DDD的更多信息,请参阅下一个边栏)。
+
((("domain driven design (DDD)", "domain, defined")))
The _domain_ is a fancy way of saying _the problem you're trying to solve._
Your authors currently work for an online retailer of furniture. Depending on
@@ -35,6 +48,10 @@ procurement, or product design, or logistics and delivery. Most programmers
spend their days trying to improve or automate business processes; the domain
is the set of activities that those processes support.
+“_Domain_”(领域)是一个较为花哨的说法,意思是“_你试图解决的问题_”。本书的作者目前为一家在线家具零售商工作。
+根据你所讨论的系统不同,领域可能是采购与供应、产品设计,或者物流与交付。大多数程序员每天的工作是试图改进或自动化业务流程;
+领域就是这些流程所支持的一组活动。
+
((("model (domain)")))
A _model_ is a map of a process or phenomenon that captures a useful property.
Humans are exceptionally good at producing models of things in their heads. For
@@ -45,22 +62,38 @@ intuitions about how objects behave at near-light speeds or in a vacuum because
our model was never designed to cover those cases. That doesn't mean the model
is wrong, but it does mean that some predictions fall outside of its domain.
+“_Model_”(模型)是对某个过程或现象的映射,其目的是捕捉其中一个有用的特性。人类在头脑中构建事物模型的能力尤为出色。
+例如,当有人向你扔一个球时,你几乎是下意识地预测出球的运动轨迹,因为你头脑中有一个关于物体在空间中如何运动的模型。
+当然,这个模型绝对称不上完美。比如,人类对物体在接近光速或真空中的行为直觉是非常糟糕的,
+因为我们的模型从未被设计用来涵盖这些情况。但这并不意味着模型是错误的,
+而是说明有些预测超出了它的领域范围。
+
The domain model is the mental map that business owners have of their
businesses. All business people have these mental maps--they're how humans think
about complex processes.
+领域模型是业务所有者对其业务的心智地图。所有的业务人士都有这样的心智地图——这是人类思考复杂流程的方式。
+
You can tell when they're navigating these maps because they use business speak.
Jargon arises naturally among people who are collaborating on complex systems.
+当他们在运用这些心智地图时,你可以通过他们使用的业务语言察觉到。行话(术语)是在人们共同协作处理复杂系统时自然产生的。
+
Imagine that you, our unfortunate reader, were suddenly transported light years
away from Earth aboard an alien spaceship with your friends and family and had
to figure out, from first principles, how to navigate home.
+想象一下,作为我们“不幸”的读者,你突然和你的朋友和家人一起被传送到一艘外星飞船上,飞离地球数光年远,
+并且不得不从基本原理开始,推导出如何导航回家。
+
In your first few days, you might just push buttons randomly, but soon you'd
learn which buttons did what, so that you could give one another instructions.
"Press the red button near the flashing doohickey and then throw that big
lever over by the radar gizmo," you might say.
+在最初的几天里,你可能会随意按下各种按钮,但很快你就会学会每个按钮的功能,这样你们就可以相互传递指令。
+你可能会说:“按下闪烁装置旁边的那个红色按钮,然后拉下雷达装置旁边的那个大杠杆。”
+
Within a couple of weeks, you'd become more precise as you adopted words to
describe the ship's functions: "Increase oxygen levels in cargo bay three"
or "turn on the little thrusters." After a few months, you'd have adopted
@@ -68,8 +101,12 @@ language for entire complex processes: "Start landing sequence" or "prepare
for warp." This process would happen quite naturally, without any formal effort
to build a shared glossary.
+几周之内,随着你们采用新的词汇来描述飞船的功能,你们的表达会变得更加精确:“增加三号货舱的氧气水平”或“启动小型推进器”。
+再过几个月,你们可能已经为整个复杂的流程采用了新的语言:“启动着陆程序”或“准备跳跃”。
+这一过程会非常自然地发生,无需为建立共享词汇表付出任何正式的努力。
+
[role="nobreakinside less_space"]
-.This Is Not a DDD Book. You Should Read a DDD Book.
+.This Is Not a DDD Book. You Should Read a DDD Book.(这不是一本关于 DDD 的书。你应该读一本关于 DDD 的书。)
*****************************************************************
Domain-driven design, or DDD, popularized the concept of domain modeling,footnote:[
@@ -85,22 +122,43 @@ architecture patterns that we cover in this book—including Entity, Aggregate,
Value Object (see <>), and Repository (in
<>)—come from the DDD tradition.
+领域驱动设计(Domain-Driven Design,简称DDD)推广了领域建模的概念,脚注:[
+DDD 并非领域建模的起源。Eric Evans 提及了 Rebecca Wirfs-Brock 和 Alan McKean
+所著的 2002 年出版的《_Object Design_》(Addison-Wesley Professional),
+该书引入了责任驱动设计(Responsibility-Driven Design),而DDD是其一个专注于领域的特殊案例。
+但即便如此,时间点仍然显得较晚,面向对象(OO)的爱好者会告诉你可以更早回溯到 Ivar Jacobson 和 Grady Booch;
+这一术语自上世纪80年代中期就已存在。]
+通过专注于核心业务领域,DDD 在彻底改变人们的软件设计方式方面取得了巨大的成功。
+本书中涵盖的许多架构模式——包括实体(Entity)、聚合(Aggregate)、值对象(Value Object,
+详见 <>)以及仓储(Repository,详见 <>)——都源于DDD的传统。
+
In a nutshell, DDD says that the most important thing about software is that it
provides a useful model of a problem. If we get that model right, our
software delivers value and makes new things possible.
+简而言之,DDD 认为软件最重要的事情是它能够提供一个问题的有用模型。如果我们把这个模型设计正确,软件就能够创造价值,并使新的事物成为可能。
+
If we get the model wrong, it becomes an obstacle to be worked around. In this book,
we can show the basics of building a domain model, and building an architecture
around it that leaves the model as free as possible from external constraints,
so that it's easy to evolve and change.
+如果我们把模型设计错了,它就会成为需要绕开的障碍。在本书中,我们会展示构建领域模型的基础知识,以及围绕领域模型构建的架构,
+尽可能让模型不受外部约束的影响,以便它能够轻松演化和变更。
+
But there's a lot more to DDD and to the processes, tools, and techniques for
developing a domain model. We hope to give you a taste of it, though,
and cannot encourage you enough to go on and read a proper DDD book:
+但是,DDD 及其用于开发领域模型的流程、工具和技术还有更多内容可以探讨。我们希望能够让你初步了解这些内容,
+并强烈鼓励你进一步阅读一本真正的DDD专著:
+
* The original "blue book," _Domain-Driven Design_ by Eric Evans (Addison-Wesley Professional)
+原版的“蓝皮书”,Eric Evans 所著的《_领域驱动设计_》(艾迪生-韦斯利专业出版社)。
+
* The "red book," _Implementing Domain-Driven Design_
by Vaughn Vernon (Addison-Wesley Professional)
+“红皮书”,Vaughn Vernon 所著的《_实现领域驱动设计_》(艾迪生-韦斯利专业出版社)。
*****************************************************************
@@ -108,25 +166,36 @@ So it is in the mundane world of business. The terminology used by business
stakeholders represents a distilled understanding of the domain model, where
complex ideas and processes are boiled down to a single word or phrase.
+在平凡的商业世界中也是如此。业务利益相关者使用的术语代表了对领域模型的提炼理解,其中复杂的理念和流程被简化为一个词或短语。
+
When we hear our business stakeholders using unfamiliar words, or using terms
in a specific way, we should listen to understand the deeper meaning and encode
their hard-won experience into our software.
+当我们听到业务利益相关者使用不熟悉的词汇,或以特定方式使用术语时,我们应该仔细倾听,去理解其更深层次的含义,并将他们来之不易的经验融入到我们的软件中。
+
We're going to use a real-world domain model throughout this book, specifically
a model from our current employment. MADE.com is a successful furniture
retailer. We source our furniture from manufacturers all over the world and
sell it across Europe.
+在本书中,我们将使用一个真实世界的领域模型,具体来说,是来自我们当前工作的一个模型。MADE.com 是一家成功的家具零售商。我们从世界各地的制造商采购家具,并将其销往整个欧洲。
+
When you buy a sofa or a coffee table, we have to figure out how best
to get your goods from Poland or China or Vietnam and into your living room.
+当你购买一张沙发或一张咖啡桌时,我们必须想清楚如何以最佳方式把你的商品从波兰、中国或越南运到你的客厅。
+
At a high level, we have separate systems that are responsible for buying
stock, selling stock to customers, and shipping goods to customers. A
system in the middle needs to coordinate the process by allocating stock
to a customer's orders; see <>.
+从宏观上看,我们有独立的系统分别负责采购库存、向客户销售库存以及向客户运输商品。
+而中间的一个系统需要通过将库存分配给客户的订单来协调整个流程;详见 <>。
+
[[allocation_context_diagram]]
-.Context diagram for the allocation service
+.Context diagram for the allocation service(分配服务的上下文图)
image::images/apwp_0102.png[]
[role="image-source"]
----
@@ -161,18 +230,28 @@ business has been presenting stock and lead times based on what is physically
available in the warehouse. If and when the warehouse runs out, a product is
listed as "out of stock" until the next shipment arrives from the manufacturer.
+为了本书的目的,我们假设业务决定实施一种令人兴奋的新方法来分配库存。到目前为止,
+业务一直是根据仓库中实际可用的库存和交货时间来展示商品的。如果仓库的库存耗尽,产品会被标记为“缺货”,
+直到下一批货物从制造商处到达为止。
+
Here's the innovation: if we have a system that can keep track of all our shipments
and when they're due to arrive, we can treat the goods on those ships as
real stock and part of our inventory, just with slightly longer lead times.
Fewer goods will appear to be out of stock, we'll sell more, and the business
can save money by keeping lower inventory in the domestic warehouse.
+创新之处在于:如果我们有一个系统可以追踪所有发货信息以及到货时间,我们就可以将那些在途货物视为真实库存并作为库存的一部分,
+只是交货时间稍长一些。这样一来,缺货的商品会减少,我们会卖出更多商品,同时业务也可以通过降低国内仓库的库存量来节省成本。
+
But allocating orders is no longer a trivial matter of decrementing a single
quantity in the warehouse system. We need a more complex allocation mechanism.
Time for some domain modeling.
+但是,分配订单不再是简单地减少仓库系统中的某个数量这么简单了。我们需要一个更复杂的分配机制。是时候进行领域建模了。
+
=== Exploring the Domain Language
+探索领域语言
((("domain language")))
((("domain modeling", "domain language")))
@@ -181,62 +260,100 @@ have an initial conversation with our business experts and agree on a glossary
and some rules for the first minimal version of the domain model. Wherever
possible, we ask for concrete examples to illustrate each rule.
+理解领域模型需要时间、耐心以及便利贴。我们与业务专家进行初步讨论,并为领域模型的第一个最小版本确定一个词汇表和一些规则。
+在可能的情况下,我们会要求提供具体的示例来说明每条规则。
+
We make sure to express those rules in the business jargon (the _ubiquitous
language_ in DDD terminology). We choose memorable identifiers for our objects
so that the examples are easier to talk about.
+我们确保使用业务术语(在 DDD 术语中称为 _通用语言(ubiquitous language)_ )来表达这些规则。我们为对象选择易于记忆的标识符,这样可以更方便地讨论这些示例。
+
<> shows some notes we might have taken while having a
conversation with our domain experts about allocation.
+<> 展示了我们在与领域专家讨论分配时可能记录的一些笔记。
+
[[allocation_notes]]
-.Some Notes on Allocation
+.Some Notes on Allocation(一些关于分配的笔记)
****
A _product_ is identified by a _SKU_, pronounced "skew," which is short for _stock-keeping unit_. _Customers_ place _orders_. An order is identified by an _order reference_
and comprises multiple _order lines_, where each line has a _SKU_ and a _quantity_. For example:
+_产品(product)_ 通过 _SKU_ 来标识;SKU 读作 “skew”,是 _库存量单位(stock-keeping unit)_ 的缩写。_客户(customer)_ 会下 _订单(order)_。一个订单通过 _订单引用(order reference)_ 来标识,
+并包含多个 _订单项(order line)_,每个订单项都有一个 _SKU_ 和一个 _数量(quantity)_。例如:
+
- 10 units of RED-CHAIR
+(10 件 RED-CHAIR)
- 1 unit of TASTELESS-LAMP
+(1 件 TASTELESS-LAMP)
The purchasing department orders small _batches_ of stock. A _batch_ of stock has a unique ID called a _reference_, a _SKU_, and a _quantity_.
+采购部门会订购小的 _批次(batch)_ 库存。一个 _批次(batch)_ 库存具备一个名为 _引用(reference)_ 的唯一 ID、一个 _SKU_ 和一个 _数量(quantity)_。
+
We need to _allocate_ _order lines_ to _batches_. When we've allocated an
order line to a batch, we will send stock from that specific batch to the
customer's delivery address. When we allocate _x_ units of stock to a batch, the _available quantity_ is reduced by _x_. For example:
+我们需要将 _订单项(order line)_ _分配(allocate)_ 到 _批次(batch)_ 。当我们将某条订单项分配到某个批次时,我们会从该特定批次发送库存到客户的配送地址。
+当我们将 _x_ 单位的库存分配到一个批次时,该批次的 _可用数量(available quantity)_ 会减少 _x_。例如:
+
- We have a batch of 20 SMALL-TABLE, and we allocate an order line for 2 SMALL-TABLE.
+我们有一个包含 20 件 SMALL-TABLE 的批次,并分配了一个包含 2 件 SMALL-TABLE 的订单项。
- The batch should have 18 SMALL-TABLE remaining.
+该批次应剩余 18 件 SMALL-TABLE。
We can't allocate to a batch if the available quantity is less than the quantity of the order line. For example:
+如果批次的可用数量小于订单项的数量,我们就无法分配。例如:
+
- We have a batch of 1 BLUE-CUSHION, and an order line for 2 BLUE-CUSHION.
+我们有一个包含 1 件 BLUE-CUSHION 的批次,以及一个包含 2 件 BLUE-CUSHION 的订单项。
- We should not be able to allocate the line to the batch.
+我们不应该将该订单项分配到该批次中。
We can't allocate the same line twice. For example:
+我们不能将同一个订单项分配两次。例如:
+
- We have a batch of 10 BLUE-VASE, and we allocate an order line for 2 BLUE-VASE.
+我们有一个包含 10 件 BLUE-VASE 的批次,并分配了一个包含 2 件 BLUE-VASE 的订单项。
- If we allocate the order line again to the same batch, the batch should still
have an available quantity of 8.
+如果我们再次将该订单项分配到同一个批次中,该批次的可用数量仍应为 8。
Batches have an _ETA_ if they are currently shipping, or they may be in _warehouse stock_. We allocate to warehouse stock in preference to shipment batches. We allocate to shipment batches in order of which has the earliest ETA.
+
+批次如果当前正在运输,则有一个 _ETA(预计到达时间)_ ,否则可能在 _仓库库存(warehouse stock)_ 中。
+我们优先将订单分配给仓库库存,而不是运输批次。对于运输批次,我们按预计到达时间最早的顺序进行分配。
****
=== Unit Testing Domain Models
+领域模型的单元测试
((("unit testing", "of domain models", id="ix_UTDM")))
((("domain modeling", "unit testing domain models", id="ix_dommodUT")))
We're not going to show you how TDD works in this book, but we want to show you
how we would construct a model from this business conversation.
+我们不会在本书中向你展示TDD的工作原理,但我们想向你展示我们如何从这场业务对话中构建模型。
+
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
******************************************************************************
Why not have a go at solving this problem yourself? Write a few unit tests to
see if you can capture the essence of these business rules in nice, clean
code (ideally without looking at the solution we came up with below!)
+为什么不自己动手尝试解决这个问题呢?编写一些单元测试,看看是否可以用优雅、简洁的代码捕捉这些业务规则的核心(最好不要偷看我们下面提出的解决方案!)
+
You'll find some https://github.com/cosmicpython/code/tree/chapter_01_domain_model_exercise[placeholder unit tests on GitHub], but you could just start from
scratch, or combine/rewrite them however you like.
+你会在 https://github.com/cosmicpython/code/tree/chapter_01_domain_model_exercise[GitHub 上找到一些占位单元测试],
+但你也可以从头开始,或者随意组合/重写它们。
+
//TODO: add test_cannot_allocate_same_line_twice ?
//(EJ3): nice to have for completeness, but not necessary
@@ -244,8 +361,10 @@ scratch, or combine/rewrite them however you like.
Here's what one of our first tests might look like:
+以下是我们最初的一个测试可能的样子:
+
[[first_test]]
-.A first test for allocation (test_batches.py)
+.A first test for allocation (test_batches.py)(一个关于分配的初步测试)
====
[source,python]
----
@@ -264,11 +383,16 @@ system, and the names of the classes and variables that we use are taken from th
business jargon. We could show this code to our nontechnical coworkers, and
they would agree that this correctly describes the behavior of the system.
+我们的单元测试名称描述了我们期望系统表现出的行为,而我们使用的类名和变量名来源于业务术语。
+我们可以将这段代码展示给我们的非技术同事,他们会认可这段代码正确地描述了系统的行为。
+
[role="pagebreak-before"]
And here is a domain model that meets our requirements:
+以下是一个符合我们需求的领域模型:
+
[[domain_model_1]]
-.First cut of a domain model for batches (model.py)
+.First cut of a domain model for batches (model.py)(批次领域模型的初步构建)
====
[source,python]
[role="non-head"]
@@ -296,6 +420,8 @@ class Batch:
with no behavior.footnote:[In previous Python versions, we
might have used a namedtuple. You could also check out Hynek Schlawack's
excellent https://pypi.org/project/attrs[attrs].]
+`OrderLine` 是一个不可变的 dataclass,没有任何行为。脚注:[在早期版本的 _Python_ 中,
+我们可能会使用 namedtuple。你也可以去了解一下 Hynek Schlawack 出色的 https://pypi.org/project/attrs[attrs]。]
<2> We're not showing imports in most code listings, in an attempt to keep them
clean. We're hoping you can guess
@@ -304,12 +430,17 @@ class Batch:
anything, you can see the full working code for each chapter in
its branch (e.g.,
https://github.com/cosmicpython/code/tree/chapter_01_domain_model[chapter_01_domain_model]).
+在大多数代码清单中,我们没有展示导入内容,以尽量保持简洁。我们希望你能猜到这是通过 `from dataclasses import dataclass` 引入的;
+同样的还有 `typing.Optional` 和 `datetime.date`。如果你想核实任何内容,可以在相应分支中查看每章的完整可运行代码
+(例如,https://github.com/cosmicpython/code/tree/chapter_01_domain_model[chapter_01_domain_model])。
<3> Type hints are still a matter of controversy in the Python world. For
domain models, they can sometimes help to clarify or document what the
expected arguments are, and people with IDEs are often grateful for them.
You may decide the price paid in terms of readability is too high.
((("type hints")))
+类型提示在 _Python_ 世界中仍然是一个有争议的话题。对于领域模型来说,它们有时可以帮助澄清或记录预期的参数是什么,
+而使用 IDE 的人通常会对此表示感激。不过你可能会认为为此付出的可读性代价过高。
Our implementation here is trivial:
a `Batch` just wraps an integer `available_quantity`,
@@ -320,11 +451,22 @@ Or perhaps you think there's not enough code?
What about some sort of check that the SKU in the `OrderLine` matches `Batch.sku`?
We saved some thoughts on validation for <>.]
+我们的实现非常简单:
+一个 `Batch` 只是包装了一个整数 `available_quantity`,
+我们在分配时对这个值进行递减。
+我们写了相当多的代码,只是为了实现从一个数字中减去另一个数字,
+但我们认为,精确地建模我们的领域会有所回报。脚注:
+[或者你认为代码还不够?
+那是否应该加入某种检查,用于验证 `OrderLine` 中的 SKU 是否匹配 `Batch.sku`?
+关于校验的一些想法,我们保存在了 <> 中。]
+
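+In outline, that first cut amounts to a frozen dataclass plus a thin wrapper
+around an integer (a sketch along the lines of the listing above):
+
+[source,python]
+----
+from dataclasses import dataclass
+from datetime import date
+from typing import Optional
+
+
+@dataclass(frozen=True)
+class OrderLine:
+    orderid: str
+    sku: str
+    qty: int
+
+
+class Batch:
+    def __init__(self, ref: str, sku: str, qty: int, eta: Optional[date]):
+        self.reference = ref
+        self.sku = sku
+        self.eta = eta
+        self.available_quantity = qty
+
+    def allocate(self, line: OrderLine):
+        # The whole model so far: subtract one number from another.
+        self.available_quantity -= line.qty
+----
+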
Let's write some new failing tests:
+让我们编写一些新的失败测试:
+
[[test_can_allocate]]
-.Testing logic for what we can allocate (test_batches.py)
+.Testing logic for what we can allocate (test_batches.py)(测试可分配内容的逻辑)
====
[source,python]
----
@@ -359,12 +501,18 @@ the same SKU; and we've written four simple tests for a new method
`can_allocate`. Again, notice that the names we use mirror the language of our
domain experts, and the examples we agreed upon are directly written into code.
+这里没有什么太出乎意料的地方。我们对测试套件进行了重构,以避免为同一个 SKU 创建批次和订单项时重复相同的代码;
+然后我们为新方法 `can_allocate` 编写了四个简单的测试。同样需要注意的是,我们使用的名称反映了领域专家的语言,
+而我们事先商定的示例也被直接编写进了代码中。
+
We can implement this straightforwardly, too, by writing the `can_allocate`
method of `Batch`:
+我们也可以通过编写 `Batch` 的 `can_allocate` 方法来简单直接地实现这一点:
+
[[can_allocate]]
-.A new method in the model (model.py)
+.A new method in the model (model.py)(模型中的一个新方法)
====
[source,python]
----
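+    # A sketch of the new method: a batch can accept a line only if the
+    # SKUs match and there is enough remaining stock.
+    def can_allocate(self, line: OrderLine) -> bool:
+        return self.sku == line.sku and self.available_quantity >= line.qty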
@@ -377,9 +525,12 @@ So far, we can manage the implementation by just incrementing and decrementing
`Batch.available_quantity`, but as we get into `deallocate()` tests, we'll be
forced into a more intelligent solution:
+到目前为止,我们只需增减 `Batch.available_quantity` 就能完成实现,
+但当我们开始编写 `deallocate()` 的测试时,我们将不得不采用一个更聪明的方案:
+
[role="pagebreak-before"]
[[test_deallocate_unallocated]]
-.This test is going to require a smarter model (test_batches.py)
+.This test is going to require a smarter model (test_batches.py)(此测试将需要一个更智能的模型)
====
[source,python]
----
@@ -396,8 +547,11 @@ needs to understand which lines have been allocated. Let's look at the
implementation:
+在这个测试中,我们断言从批次中解除一个订单项分配没有任何效果,除非该批次之前已经分配了该订单项。为了实现这一点,
+我们的 `Batch` 需要了解哪些订单项已被分配。让我们来看一下实现:
+
[[domain_model_complete]]
-.The domain model now tracks allocations (model.py)
+.The domain model now tracks allocations (model.py)(领域模型现在能够跟踪分配情况)
====
[source,python]
[role="non-head"]
@@ -439,7 +593,7 @@ class Batch:
[[model_diagram]]
-.Our model in UML
+.Our model in UML(我们的模型以 UML 表示)
image::images/apwp_0103.png[]
[role="image-source"]
----
@@ -473,16 +627,24 @@ Now we're getting somewhere! A batch now keeps track of a set of allocated
just add to the set. Our `available_quantity` is now a calculated property:
purchased quantity minus allocated quantity.
+现在我们有点进展了!一个批次现在会跟踪一组已分配的 `OrderLine` 对象。当我们进行分配时,如果有足够的可用数量,我们就将订单项添加到集合中。
+我们的 `available_quantity` 现在是一个计算属性:采购数量减去分配数量。
+
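+In sketch form, the reworked model looks roughly like this:
+
+[source,python]
+----
+class Batch:
+    def __init__(self, ref: str, sku: str, qty: int, eta: Optional[date]):
+        self.reference = ref
+        self.sku = sku
+        self.eta = eta
+        self._purchased_quantity = qty
+        self._allocations = set()  # type: Set[OrderLine]
+
+    def allocate(self, line: OrderLine):
+        if self.can_allocate(line):
+            self._allocations.add(line)
+
+    def deallocate(self, line: OrderLine):
+        if line in self._allocations:
+            self._allocations.remove(line)
+
+    @property
+    def allocated_quantity(self) -> int:
+        return sum(line.qty for line in self._allocations)
+
+    @property
+    def available_quantity(self) -> int:
+        return self._purchased_quantity - self.allocated_quantity
+
+    def can_allocate(self, line: OrderLine) -> bool:
+        return self.sku == line.sku and self.available_quantity >= line.qty
+----
+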
Yes, there's plenty more we could do. It's a little disconcerting that
both `allocate()` and `deallocate()` can fail silently, but we have the
basics.
+是的,我们还有很多可以改进的地方。目前有些令人不安的是,`allocate()` 和 `deallocate()` 都可能以静默方式失败,
+但我们已经实现了基础功能。
+
Incidentally, using a set for `._allocations` makes it simple for us
to handle the last test, because items in a set are unique:
+顺便提一下,使用集合 (`set`) 来存储 `._allocations` 使我们可以轻松处理最后一个测试,因为集合中的元素是唯一的:
+
[[last_test]]
-.Last batch test! (test_batches.py)
+.Last batch test! (test_batches.py)(最后一个批次测试!)
====
[source,python]
----
@@ -506,6 +668,12 @@ warehouse in a different region if we're out of stock in the home region. And
so on. A real business in the real world knows how to pile on complexity faster
than we can show on the page!
+就目前而言,指责这个领域模型简单到根本用不上 DDD(甚至用不上对象!)也许是公平的。
+在现实生活中,会冒出无数的业务规则和边界情况:客户可能会要求在特定的未来日期送货,
+这意味着我们可能不希望把他们的订单分配给最早的批次;有些 SKU 并不按批次管理,而是直接向供应商按需订购,
+因此遵循不同的逻辑;根据客户所在的位置,我们只能把订单分配给其所在区域内的一部分仓库和运输批次——
+不过对于某些 SKU,当本区域缺货时,我们也乐意从其他区域的仓库发货,诸如此类。现实世界中的真实业务,其堆叠复杂性的速度比我们在书页上所能展示的还要快!
+
But taking this simple domain model as a placeholder for something more
complex, we're going to extend our simple domain model in the rest of the book
and plug it into the real world of APIs and databases and spreadsheets. We'll
@@ -513,16 +681,21 @@ see how sticking rigidly to our principles of encapsulation and careful
layering will help us to avoid a ball of mud.
+不过,我们将把这个简单的领域模型作为更复杂事物的占位符,并在本书的其余部分扩展这个简单的领域模型,
+将其融入真实世界中的 APIs、数据库和电子表格。我们会看到,坚持封装原则和精心设计的分层结构,将如何帮助我们避免陷入一团混乱。
+
[role="nobreakinside"]
-.More Types for More Type Hints
+.More Types for More Type Hints(更多类型以加强类型提示)
*******************************************************************************
((("type hints")))
If you really want to go to town with type hints, you could go so far as
wrapping primitive types by using `typing.NewType`:
+如果你真的想在类型提示上大展身手,可以通过使用 `typing.NewType` 将原始类型包装起来:
+
[[too_many_types]]
-.Just taking it way too far, Bob
+.Just taking it way too far, Bob(这也太过分了,Bob)
====
[source,python]
[role="skip"]
@@ -547,11 +720,16 @@ class Batch:
That would allow our type checker to make sure that we don't pass a `Sku` where a
`Reference` is expected, for example.
+例如,这将允许我们的类型检查器确保我们不会在需要 `Reference` 的地方误传入一个 `Sku`。
+
Whether you think this is wonderful or appalling is a matter of debate.footnote:[It is appalling. Please, please don't do this. —Harry]
+你认为这是绝妙的还是糟糕的,这方面见仁见智。脚注:[这是糟糕的,拜托,千万别这么做。——Harry]
+
*******************************************************************************
==== Dataclasses Are Great for Value Objects
+数据类非常适合作为值对象
((("value objects", "using dataclasses for")))
((("dataclasses", "use for value objects")))
@@ -561,9 +739,12 @@ line? In our business language, an _order_ has multiple _line_ items, where
each line has a SKU and a quantity. We can imagine that a simple YAML file
containing order information might look like this:
+在之前的代码示例中,我们大量使用了 `line`,但 line 到底是什么?在我们的业务语言中,一个 _订单(order)_ 包含多个 _订单项(line)_,
+每个订单项都有一个 SKU 和一个数量。我们可以想象,一个包含订单信息的简单 YAML 文件可能如下所示:
+
[[yaml_order_example]]
-.Order info as YAML
+.Order info as YAML(以YAML格式表示的订单信息)
====
[source,yaml]
[role="skip"]
@@ -585,16 +766,22 @@ Notice that while an order has a _reference_ that uniquely identifies it, a
_line_ does not. (Even if we add the order reference to the `OrderLine` class,
it's not something that uniquely identifies the line itself.)
+请注意,虽然一个订单有一个能够唯一标识它的 _reference_(引用),但一个 _line_(订单项)没有。
+(即使我们将订单的引用添加到 `OrderLine` 类中,它也无法唯一标识订单项本身。)
+
((("value objects", "defined")))
Whenever we have a business concept that has data but no identity, we
often choose to represent it using the _Value Object_ pattern. A _value object_ is any
domain object that is uniquely identified by the data it holds; we usually
make them immutable:
+当我们遇到某个具有数据但没有唯一标识的业务概念时,我们通常会选择用 _值对象_(Value Object)模式来表示它。
+一个 _值对象_ 是能够由其持有的数据唯一标识的领域对象;我们通常将它们设计为不可变的:
+
// [SG] seems a bit odd to hear about value objects before any mention of entities.
[[orderline_value_object]]
-.OrderLine is a value object
+.OrderLine is a value object(OrderLine 是一个值对象)
====
[source,python]
[role="skip"]
@@ -612,9 +799,12 @@ One of the nice things that dataclasses (or namedtuples) give us is _value
equality_, which is the fancy way of saying, "Two lines with the same `orderid`,
`sku`, and `qty` are equal."
+数据类(或 namedtuples)提供的一个好处是 _值相等_(value equality),这是一个高大上的说法,
+用来表达:“两个具有相同 `orderid`、`sku` 和 `qty` 的订单项是相等的。”
+
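+A two-line demonstration of what that buys us:
+
+[source,python]
+----
+# Two lines with the same data compare equal, and a set treats them as one:
+assert OrderLine("order1", "RED-CHAIR", 10) == OrderLine("order1", "RED-CHAIR", 10)
+assert len({OrderLine("order1", "RED-CHAIR", 10),
+            OrderLine("order1", "RED-CHAIR", 10)}) == 1
+----
+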
[[more_value_objects]]
-.More examples of value objects
+.More examples of value objects(更多值对象的示例)
====
[source,python]
[role="skip"]
@@ -650,9 +840,13 @@ product code, and quantity. We can still have complex behavior on a value
object, though. In fact, it's common to support operations on values; for
example, mathematical operators:
+这些值对象符合我们对其值在现实世界中如何运作的直观理解。我们谈论的究竟是 _哪张_ 10英镑纸币并不重要,因为它们的面值是相同的。
+同样地,如果名字和姓氏都相同,那么两个姓名就是相等的;而如果两个订单项具有相同的客户订单、产品代码和数量,它们也是等价的。
+不过,值对象仍然可以具有复杂的行为。事实上,支持基于值的操作是很常见的,比如数学运算符操作:
+
[[value_object_maths_tests]]
-.Testing Math with value objects
+.Testing Math with value objects(使用值对象测试数学运算)
====
[source,python]
[role="skip"]
@@ -685,8 +879,10 @@ def multiplying_two_money_values_is_an_error():
To get those tests to actually pass you'll need to start implementing some
magic methods on our `Money` class:
+为了让那些测试真正通过,你需要开始在我们的 `Money` 类上实现一些魔术方法:
+
[[value_object_maths]]
-.Implementing Math with value objects
+.Implementing Math with value objects(使用值对象实现数学运算)
====
[source,python]
[role="skip"]
@@ -707,6 +903,7 @@ class Money:
==== Value Objects and Entities
+值对象与实体
((("value objects", "and entities", secondary-sortas="entities")))
((("domain modeling", "unit testing domain models", "value objects and entities")))
@@ -716,17 +913,26 @@ value object: any object that is identified only by its data and doesn't have a
long-lived identity. What about a batch, though? That _is_ identified by a
reference.
+一个订单项是由其订单ID、SKU 和数量唯一标识的;如果我们更改其中的一个值,就得到了一个新的订单项。
+这就是值对象的定义:任何仅由其数据标识且没有长期存在标识的对象。
+那么,对于一个批次(batch)呢?它是由一个引用(reference)标识的。
+
((("entities", "defined")))
We use the term _entity_ to describe a domain object that has long-lived
identity. On the previous page, we introduced a `Name` class as a value object.
If we take the name Harry Percival and change one letter, we have the new
`Name` object Barry Percival.
+我们使用术语 _实体_(entity)来描述具有长期标识的领域对象。在前一页中,我们引入了一个作为值对象的 `Name` 类。
+如果我们将名字 "Harry Percival" 改变一个字母,就会得到一个新的 `Name` 对象 "Barry Percival"。
+
It should be clear that Harry Percival is not equal to Barry Percival:
+显然,Harry Percival 不等于 Barry Percival:
+
[[test_equality]]
-.A name itself cannot change...
+.A name itself cannot change...(名字本身无法改变...)
====
[source,python]
[role="skip"]
@@ -742,9 +948,12 @@ marital status, and even their gender, but we continue to recognize them as the
same individual. That's because humans, unlike names, have a persistent
_identity_:
+但是作为一个 _人_ 的 Harry 呢?人可以改变他们的名字、婚姻状况,甚至性别,但是我们仍然将他们视为同一个个体。
+这是因为人类与名字不同,拥有一个持久的 _身份_:
+
[[person_identity]]
-.But a person can!
+.But a person can!(但一个人可以!)
====
[source,python]
[role="skip"]
@@ -774,14 +983,19 @@ and they are still recognizably the same thing. Batches, in our example, are
entities. We can allocate lines to a batch, or change the date that we expect
it to arrive, and it will still be the same entity.
+实体与值对象不同,具有 _身份相等_(identity equality)。我们可以更改它们的值,但它们仍然可以被识别为同一个事物。
+在我们的示例中,批次(batches)是实体。我们可以将订单项分配到一个批次,或者更改我们期望它到达的日期,但它仍然是同一个实体。
+
((("equality operators, implementing on entities")))
We usually make this explicit in code by implementing equality operators on
entities:
+我们通常通过在实体上实现相等运算符来在代码中显式表达这一点:
+
[[equality_on_batches]]
-.Implementing equality operators (model.py)
+.Implementing equality operators (model.py)(实现等价运算符)
====
[source,python]
----
@@ -804,6 +1018,9 @@ Python's +++__eq__+++ magic method
defines the behavior of the class for the `==` operator.footnote:[The
+++__eq__+++ method is pronounced "dunder-EQ." By some, at least.]
+_Python_ 的 +++__eq__+++ 魔术方法定义了类在 `==` 运算符下的行为。
+脚注:[+++__eq__+++ 方法的发音是“dunder-EQ”(双下划线 EQ),至少对某些人来说是这样的。]
+
((("magic methods", "__hash__", secondary-sortas="hash")))
((("__hash__ magic method", primary-sortas="hash")))
For both entity and value objects, it's also worth thinking through how
@@ -811,10 +1028,15 @@ For both entity and value objects, it's also worth thinking through how
behavior of objects when you add them to sets or use them as dict keys;
you can find more info https://oreil.ly/YUzg5[in the Python docs].
+对于实体和值对象,同样值得深入思考 +++__hash__+++ 的工作原理。这是 _Python_ 用来控制对象在被添加到
+集合(sets)中或用作字典(dict)键时行为的魔术方法;更多信息可以参考 https://oreil.ly/YUzg5[Python 官方文档]。
+
For value objects, the hash should be based on all the value attributes,
and we should ensure that the objects are immutable. We get this for
free by specifying `@dataclass(frozen=True)` on the dataclass.
+对于值对象,哈希值应基于所有的值属性,并且我们应确保这些对象是不可变的。通过在数据类上指定 `@dataclass(frozen=True)`,我们就免费获得了这一特性。
+
For entities, the simplest option is to say that the hash is ++None++, meaning
that the object is not hashable and cannot, for example, be used in a set.
If for some reason you decide you really do want to use set or dict operations
@@ -822,6 +1044,9 @@ with entities, the hash should be based on the attribute(s), such as
`.reference`, that defines the entity's unique identity over time. You should
also try to somehow make _that_ attribute read-only.
+对于实体,最简单的选择是让哈希值为 ++None++,这意味着对象不可哈希,例如就不能被放进集合(set)里。如果出于某种原因,
+你确实想对实体使用集合或字典操作,那么哈希值应基于定义实体长期唯一标识的属性(例如 `.reference`),同时你还应设法让 _该_ 属性只读。
+
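+A sketch of the options side by side:
+
+[source,python]
+----
+from dataclasses import dataclass
+
+
+@dataclass(frozen=True)
+class OrderLine:  # value object: hash and equality come from all the fields
+    orderid: str
+    sku: str
+    qty: int
+
+
+class Batch:  # entity: identity lives in .reference
+    ...
+
+    def __eq__(self, other):
+        if not isinstance(other, Batch):
+            return False
+        return other.reference == self.reference
+
+    __hash__ = None  # simplest option: entities can't go in sets or dicts
+    # Or, if you really do need set/dict operations:
+    # def __hash__(self):
+    #     return hash(self.reference)
+----
+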
WARNING: This is tricky territory; you shouldn't modify +++__hash__+++
without also modifying +++__eq__+++. If you're not sure what
you're doing, further reading is suggested.
@@ -829,20 +1054,24 @@ WARNING: This is tricky territory; you shouldn't modify +++__hash__
Hynek Schlawack is a good place to start.
((("unit testing", "of domain models", startref="ix_UTDM")))
((("domain modeling", "unit testing domain models", startref="ix_dommodUT")))
-
+这是一个棘手的领域;你不应该在不同时修改 +++__eq__+++ 的情况下修改 +++__hash__+++。
+如果你不确定自己在做什么,建议进一步阅读。我们的技术审阅者 Hynek Schlawack 所著的 https://oreil.ly/vxkgX[_Python Hashes and Equality_] 是一个很好的起点。
=== Not Everything Has to Be an Object: A Domain Service Function
+并不是所有东西都必须是对象:领域服务函数
((("domain services")))
((("domain modeling", "functions for domain services", id="ix_dommodfnc")))
We've made a model to represent batches, but what we actually need
to do is allocate order lines against a specific set of batches that
represent all our stock.
+我们已经创建了一个用于表示批次的模型,但我们实际需要做的是将订单项分配到表示我们所有库存的一组特定批次中。
[quote, Eric Evans, Domain-Driven Design]
____
Sometimes, it just isn't a thing.
+有时候,它就不是一个“东西”。
____
((("service-layer services vs. domain services")))
@@ -858,11 +1087,19 @@ function, and we can take advantage of the fact that Python is a multiparadigm
language and just make it a function.
((("domain services", "function for")))
+Evans 讨论了领域服务(Domain Service)的操作,这些操作在实体或值对象中没有一个自然的归宿。
+脚注:[领域服务与<>中的服务并不是同一个概念,尽管它们常常密切相关。
+领域服务代表的是一个业务概念或流程,而服务层服务代表的是应用程序的一个用例。通常服务层会调用领域服务。]
+一个用于在给定一组批次的情况下分配订单项的“东西”,听起来更像是一个函数。我们可以利用 _Python_ 是一种多范式语言的特点,
+直接将其实现为一个函数。
+
Let's see how we might test-drive such a function:
+让我们来看一下如何通过测试驱动的方式构建这样一个函数:
+
[[test_allocate]]
-.Testing our domain service (test_allocate.py)
+.Testing our domain service (test_allocate.py)(测试我们的领域服务)
====
[source,python]
----
@@ -902,9 +1139,11 @@ def test_returns_allocated_batch_ref():
((("functions", "for domain services")))
And our service might look like this:
+我们的服务可能看起来像这样:
+
[[domain_service]]
-.A standalone function for our domain service (model.py)
+.A standalone function for our domain service (model.py)(为我们的领域服务创建一个独立函数)
====
[source,python]
[role="non-head"]
@@ -917,6 +1156,7 @@ def allocate(line: OrderLine, batches: List[Batch]) -> str:
====
==== Python's Magic Methods Let Us Use Our Models with Idiomatic Python
+_Python_ 的魔术方法让我们可以用惯用的 _Python_ 风格来使用我们的模型
((("__gt__ magic method", primary-sortas="gt")))
((("magic methods", "allowing use of domain model with idiomatic Python")))
@@ -924,12 +1164,16 @@ You may or may not like the use of `next()` in the preceding code, but we're pre
sure you'll agree that being able to use `sorted()` on our list of
batches is nice, idiomatic Python.
+你可能会喜欢或不喜欢前面代码中使用 `next()`,但我们很确定你会同意能够对我们的批次列表使用 `sorted()` 是不错的、符合 _Python_ 惯用风格的做法。
+
To make it work, we implement +++__gt__+++ on our domain model:
+为了让其正常工作,我们在我们的领域模型上实现了 +++__gt__+++:
+
[[dunder_gt]]
-.Magic methods can express domain semantics (model.py)
+.Magic methods can express domain semantics (model.py)(魔术方法可以表达领域语义)
====
[source,python]
----
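+    # A sketch of the ordering logic: warehouse stock (eta is None) sorts
+    # first, then shipments in order of earliest ETA.
+    def __gt__(self, other):
+        if self.eta is None:
+            return False
+        if other.eta is None:
+            return True
+        return self.eta > other.eta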
@@ -947,8 +1191,11 @@ class Batch:
That's lovely.
+那真是太好了。
+
==== Exceptions Can Express Domain Concepts Too
+异常也可以表达领域概念
((("domain exceptions")))
((("exceptions", "expressing domain concepts")))
@@ -957,9 +1204,12 @@ concepts too. In our conversations with domain experts, we've learned about the
possibility that an order cannot be allocated because we are _out of stock_,
and we can capture that by using a _domain exception_:
+我们还有一个最后的概念需要探讨:异常也可以用来表达领域概念。在与领域专家的交流中,我们了解到订单可能无法分配的情况,
+因为我们处于 _缺货_ 状态,我们可以通过使用 _领域异常_ 来捕获这种情况:
+
[[test_out_of_stock]]
-.Testing out-of-stock exception (test_allocate.py)
+.Testing out-of-stock exception (test_allocate.py)(测试缺货异常)
====
[source,python]
----
@@ -974,15 +1224,16 @@ def test_raises_out_of_stock_exception_if_cannot_allocate():
[role="nobreakinside"]
-.Domain Modeling Recap
+.Domain Modeling Recap(领域建模回顾)
*****************************************************************
-Domain modeling::
+Domain modeling(领域建模)::
This is the part of your code that is closest to the business,
the most likely to change, and the place where you deliver the
most value to the business. Make it easy to understand and modify.
((("domain modeling", startref="ix_dommod")))
+这是你的代码中最贴近业务的部分,也是最有可能发生变化的地方,同时也是你为业务带来最大价值的地方。确保它易于理解和修改。
-Distinguish entities from value objects::
+Distinguish entities from value objects(区分实体与值对象)::
A value object is defined by its attributes. It's usually best
implemented as an immutable type. If you change an attribute on
a Value Object, it represents a different object. In contrast,
@@ -991,21 +1242,27 @@ Distinguish entities from value objects::
an entity (usually some sort of name or reference field).
((("entities", "value objects versus")))
((("value objects", "entities versus")))
+值对象由其属性定义。通常最好将其实现为不可变类型。如果你更改值对象的一个属性,它就代表了一个不同的对象。
+相比之下,实体的属性可能会随时间变化,但它仍然是同一个实体。关键是要定义清楚是什么 _确实_ 唯一标识一个实体(通常是某种名称或引用字段)。
-Not everything has to be an object::
+Not everything has to be an object(并不是所有东西都必须是对象)::
Python is a multiparadigm language, so let the "verbs" in your
code be functions. For every `FooManager`, `BarBuilder`, or `BazFactory`,
there's often a more expressive and readable `manage_foo()`, `build_bar()`,
or `get_baz()` waiting to happen.
((("functions")))
+_Python_ 是一门多范式语言,所以让代码中的“动词”成为函数。对于每一个 `FooManager`、`BarBuilder` 或 `BazFactory`,
+通常可以找到更加具有表现力和可读性的 `manage_foo()`、`build_bar()` 或 `get_baz()` 来代替。
-This is the time to apply your best OO design principles::
+This is the time to apply your best OO design principles(这是应用你最佳面向对象设计原则的时候)::
Revisit the SOLID principles and all the other good heuristics like "has a versus is-a,"
"prefer composition over inheritance," and so on.
((("object-oriented design principles")))
+重新审视 SOLID 原则以及其他优秀的设计启发,比如“有一个(Has-a) vs 是一个(Is-a)”、“优先使用组合而非继承”等等。
-You'll also want to think about consistency boundaries and aggregates::
+You'll also want to think about consistency boundaries and aggregates(你还需要考虑一致性边界和聚合)::
But that's a topic for <>.
+但这是 <> 的主题。
*****************************************************************
@@ -1013,9 +1270,12 @@ We won't bore you too much with the implementation, but the main thing
to note is that we take care in naming our exceptions in the ubiquitous
language, just as we do our entities, value objects, and services:
+我们不会通过过多的实现细节让你感到枯燥,但需要注意的主要一点是,我们在通用语言中命名异常时,
+与命名我们的实体、值对象和服务一样,需格外用心:
+
[[out_of_stock]]
-.Raising a domain exception (model.py)
+.Raising a domain exception (model.py)(抛出领域异常)
====
[source,python]
----
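+# A sketch: the exception is named in the ubiquitous language, and the
+# domain service raises it when no batch can take the line, e.g.:
+#
+#     except StopIteration:
+#         raise OutOfStock(f"Out of stock for sku {line.sku}")
+class OutOfStock(Exception):
+    pass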
@@ -1035,10 +1295,14 @@ def allocate(line: OrderLine, batches: List[Batch]) -> str:
<> is a visual representation of where we've ended up.
+<> 是我们最终结果的视觉表示。
+
[[maps_chapter_01_withtext]]
-.Our domain model at the end of the chapter
+.Our domain model at the end of the chapter(本章末尾的领域模型)
image::images/apwp_0104.png[]
((("domain modeling", "functions for domain services", startref="ix_dommodfnc")))
That'll probably do for now! We have a domain service that we can use for our
first use case. But first we'll need a database...
+
+到这里应该差不多了!我们已经有了一个可以用于首个用例的领域服务。但首先,我们需要一个数据库...
diff --git a/chapter_02_repository.asciidoc b/chapter_02_repository.asciidoc
index cd7bd7fe..0d546367 100644
--- a/chapter_02_repository.asciidoc
+++ b/chapter_02_repository.asciidoc
@@ -1,9 +1,12 @@
[[chapter_02_repository]]
== Repository Pattern
+仓储模式
It's time to make good on our promise to use the dependency inversion principle as
a way of decoupling our core logic from infrastructural concerns.
+是时候兑现我们的承诺,使用依赖倒置原则将核心逻辑与基础设施问题解耦了。
+
((("storage", seealso="repositories; Repository pattern")))
((("Repository pattern")))
((("data storage, Repository pattern and")))
@@ -12,11 +15,16 @@ allowing us to decouple our model layer from the data layer. We'll present a
concrete example of how this simplifying abstraction makes our system more
testable by hiding the complexities of the database.
+我们将引入 _仓储_ 模式,这是一种对数据存储的简化抽象,能够让我们的模型层与数据层解耦。
+我们会提供一个具体示例,展示这种简化抽象如何通过隐藏数据库的复杂性,使我们的系统更具可测试性。
+
<> shows a little preview of what we're going to build:
a `Repository` object that sits between our domain model and the database.
+<> 简要预览了我们将要构建的内容:一个位于领域模型和数据库之间的 `Repository` 对象。
+
[[maps_chapter_02]]
-.Before and after the Repository pattern
+.Before and after the Repository pattern(应用仓储模式前后对比)
image::images/apwp_0201.png[]
[TIP]
@@ -24,6 +32,8 @@ image::images/apwp_0201.png[]
The code for this chapter is in the
chapter_02_repository branch https://oreil.ly/6STDu[on GitHub].
+本章的代码位于 https://oreil.ly/6STDu[GitHub 上]的 chapter_02_repository 分支中。
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -35,6 +45,7 @@ git checkout chapter_01_domain_model
=== Persisting Our Domain Model
+持久化我们的领域模型
((("domain model", "persisting")))
In <> we built a simple domain model that can allocate orders
@@ -43,29 +54,45 @@ there aren't any dependencies or infrastructure to set up. If we needed to run
a database or an API and create test data, our tests would be harder to write
and maintain.
+在 <> 中,我们构建了一个简单的领域模型,它可以将订单分配给库存批次。
+因为这段代码没有任何依赖或基础设施需要设置,所以我们很容易为其编写测试。
+如果我们需要运行一个数据库或 API 并创建测试数据,那么测试将会更难编写和维护。
+
Sadly, at some point we'll need to put our perfect little model in the hands of
users and contend with the real world of spreadsheets and web
browsers and race conditions. For the next few chapters we're going to look at
how we can connect our idealized domain model to external state.
+遗憾的是,某些时候我们需要将我们完美的小模型交到用户手中,并应对现实世界中存在的电子表格、网页浏览器和竞争条件的问题。
+在接下来的几章中,我们将探讨如何将我们的理想化领域模型连接到外部状态。
+
((("minimum viable product")))
We expect to be working in an agile manner, so our priority is to get to a
minimum viable product as quickly as possible. In our case, that's going to be
a web API. In a real project, you might dive straight in with some end-to-end
tests and start plugging in a web framework, test-driving things outside-in.
+我们希望以敏捷的方式开展工作,因此我们的首要任务是尽快实现一个最小可行产品。在我们的案例中,这将是一个 Web API。在实际项目中,
+你可能会直接从一些端到端测试入手,并开始集成一个 Web 框架,以从外到内进行测试驱动开发。
+
But we know that, no matter what, we're going to need some form of persistent
storage, and this is a textbook, so we can allow ourselves a tiny bit more
bottom-up development and start to think about storage and databases.
+但我们知道,无论如何,我们都会需要某种形式的持久化存储。而且这是一本教科书,所以我们可以稍微允许自己进行一些自下而上的开发,
+开始考虑存储和数据库的问题。
+
=== Some Pseudocode: What Are We Going to Need?
+一些伪代码:我们需要什么?
When we build our first API endpoint, we know we're going to have
some code that looks more or less like the following.
+当我们构建第一个 API 端点时,我们知道会有一些代码大致如下所示。
+
[[api_endpoint_pseudocode]]
-.What our first API endpoint will look like
+.What our first API endpoint will look like(我们的第一个 API 端点会是什么样子)
====
[role="skip"]
[source,python]
@@ -87,15 +114,22 @@ NOTE: We've used Flask because it's lightweight, but you don't need
to be a Flask user to understand this book. In fact, we'll show you how
to make your choice of framework a minor detail.
((("Flask framework")))
+我们使用了 Flask,因为它很轻量化,但你并不需要是 Flask 的用户就能理解本书的内容。
+实际上,我们会向你展示如何让框架的选择成为一个无足轻重的细节。
We'll need a way to retrieve batch info from the database and instantiate our domain
model objects from it, and we'll also need a way of saving them back to the
database.
+我们需要一种方法从数据库中检索批次信息,并据此实例化我们的领域模型对象,同时也需要一种方法将这些对象保存回数据库。
+
_What? Oh, "gubbins" is a British word for "stuff." You can just ignore that. It's pseudocode, OK?_
+_什么?哦,“gubbins”是一个英国词,意思是“东西”。你可以忽略它。这只是伪代码,好吗?_
+
=== Applying the DIP to Data Access
+将依赖倒置原则 (DIP) 应用于数据访问
((("layered architecture")))
((("data access, applying dependency inversion principle to")))
@@ -103,9 +137,12 @@ As mentioned in the <>, a layered architecture is a
approach to structuring a system that has a UI, some logic, and a database (see
<>).
+正如在 <> 中提到的,分层架构是一种常见的方法,用于构建具有用户界面、逻辑和数据库的系统
+(参见 <>)。
+
[role="width-75"]
[[layered_architecture2]]
-.Layered architecture
+.Layered architecture(分层架构)
image::images/apwp_0202.png[]
@@ -114,6 +151,9 @@ Model-View-Controller (MVC). In any case, the aim is to keep the layers
separate (which is a good thing), and to have each layer depend only on the one
below it.
+Django 的模型-视图-模板(Model-View-Template, MVT)结构与此密切相关,模型-视图-控制器(Model-View-Controller, MVC)也是如此。
+无论是哪种情况,其目标都是将各层分离(这是一件好事),并使每一层仅依赖其下方的那一层。
+
((("dependencies", "none in domain model")))
But we want our domain model to have __no dependencies whatsoever__.footnote:[
I suppose we mean "no stateful dependencies." Depending on a helper library is
@@ -121,14 +161,21 @@ fine; depending on an ORM or a web framework is not.]
We don't want infrastructure concerns bleeding over into our domain model and
slowing our unit tests or our ability to make changes.
+但我们希望我们的领域模型 __完全没有任何依赖__。脚注:[我想我们指的是“没有有状态的依赖”。
+依赖一个辅助库是可以的;但依赖一个 ORM 或 Web 框架则不行。]
+我们不希望基础设施的相关问题渗透到领域模型中,从而降低单元测试的速度或限制我们进行更改的能力。
+
((("onion architecture")))
Instead, as discussed in the introduction, we'll think of our model as being on the
"inside," and dependencies flowing inward to it; this is what people sometimes call
_onion architecture_ (see <>).
+相反,正如在引言中讨论的那样,我们将把我们的模型视为处于“内部”,依赖关系向内流向它;
+这有时被称为 _洋葱架构_(参见 <>)。
+
[role="width-75"]
[[onion_architecture]]
-.Onion architecture
+.Onion architecture(洋葱架构)
image::images/apwp_0203.png[]
[role="image-source"]
----
@@ -149,13 +196,16 @@ image::images/apwp_0203.png[]
----
[role="nobreakinside less_space"]
-.Is This Ports and Adapters?
+.Is This Ports and Adapters?(这是端口和适配器模式吗?)
****
If you've been reading about architectural patterns, you may be asking
yourself questions like this:
+如果你一直在阅读有关架构模式的内容,你可能会问自己这样的问题:
+
____
_Is this ports and adapters? Or is it hexagonal architecture? Is that the same as onion architecture? What about the clean architecture? What's a port, and what's an adapter? Why do you people have so many words for the same thing?_
+_这是端口与适配器架构吗?还是六边形架构?这和洋葱架构是一样的吗?那“整洁架构”又是什么?什么是端口,什么是适配器?你们为什么用这么多词来描述同一件事?_
____
((("dependency inversion principle")))
@@ -166,28 +216,42 @@ dependency inversion principle: high-level modules (the domain) should
not depend on low-level ones (the infrastructure).footnote:[Mark Seemann has
https://oreil.ly/LpFS9[an excellent blog post] on the topic.]
+尽管有些人喜欢在细节上挑剔这些名称的区别,但它们基本上是同一件事的不同叫法,它们都归结于依赖倒置原则:高层模块(领域)不应该
+依赖低层模块(基础设施)。脚注:[Mark Seemann 在这个主题上写了一篇https://oreil.ly/LpFS9[出色的博客文章]。]
+
We'll get into some of the nitty-gritty around "depending on abstractions,"
and whether there is a Pythonic equivalent of interfaces,
<>. See also <>.
+
+我们将在本书的 <> 部分深入探讨一些关于“依赖抽象”的细节,以及是否存在 _Python_ 式的接口等价物。
+另请参见 <>。
****
=== Reminder: Our Model
+提醒:我们的模型
((("domain model", id="ix_domod")))
Let's remind ourselves of our domain model (see <>):
an allocation is the concept of linking an `OrderLine` to a `Batch`. We're
storing the allocations as a collection on our `Batch` object.
+让我们回顾一下我们的领域模型(参见 <>):
+“分配”是将一个 `OrderLine` 关联到一个 `Batch` 的概念。
+我们将分配存储为 `Batch` 对象上的一个集合。
+
[[model_diagram_reminder]]
-.Our model
+.Our model(我们的模型)
image::images/apwp_0103.png[]
// see chapter_01_domain_model for diagram source
Let's see how we might translate this to a relational database.
+让我们看看如何将其转换为关系型数据库。
+
==== The "Normal" ORM Way: Model Depends on ORM
+“常规” ORM 方法:模型依赖于 ORM
((("SQL", "generating for domain model objects")))
((("domain model", "translating to relational database", "normal ORM way, model depends on ORM")))
@@ -195,11 +259,15 @@ These days, it's unlikely that your team members are hand-rolling their own SQL
Instead, you're almost certainly using some kind of framework to generate
SQL for you based on your model objects.
+如今,你的团队成员很可能不再手写 SQL 查询了。相反,你几乎肯定会使用某种框架,根据模型对象为你生成 SQL。
+
((("object-relational mappers (ORMs)")))
These frameworks are called _object-relational mappers_ (ORMs) because they exist to
bridge the conceptual gap between the world of objects and domain modeling and
the world of databases and relational algebra.
+这些框架被称为 _对象关系映射器_(ORM),因为它们的存在是为了弥合对象和领域建模的世界与数据库和关系代数的世界之间的概念差距。
+
((("persistence ignorance")))
The most important thing an ORM gives us is _persistence ignorance_: the idea
that our fancy domain model doesn't need to know anything about how data is
@@ -208,14 +276,22 @@ on particular database technologies.footnote:[In this sense, using an ORM is
already an example of the DIP. Instead of depending on hardcoded SQL, we depend
on an abstraction, the ORM. But that's not enough for us—not in this book!]
+ORM 提供给我们的最重要的功能是 _持久化无感(persistence ignorance)_:即我们的高级领域模型无需了解数据如何加载或持久化。
+这样可以使我们的领域模型避免直接依赖特定的数据库技术。
+脚注:[从这个角度来看,使用 ORM 本身已经是依赖倒置原则(DIP)的一个示例。
+与其依赖硬编码的 SQL,我们依赖的是一个抽象层,即 ORM。
+但这对于我们来说还不够——至少在本书中还不足够!]
+
((("object-relational mappers (ORMs)", "SQLAlchemy, model depends on ORM")))
((("SQLAlchemy", "declarative syntax, model depends on ORM")))
But if you follow the typical SQLAlchemy tutorial, you'll end up with something
like this:
+但如果你按照典型的 SQLAlchemy 教程操作,你最终会得到如下代码:
+
[[typical_sqlalchemy_example]]
-.SQLAlchemy "declarative" syntax, model depends on ORM (orm.py)
+.SQLAlchemy "declarative" syntax, model depends on ORM (orm.py)(SQLAlchemy 的“声明式”语法,模型依赖于 ORM)
====
[role="skip"]
[source,python]
@@ -247,8 +323,12 @@ Can we really say this model is ignorant of the database? How can it be
separate from storage concerns when our model properties are directly coupled
to database columns?
+即使你不了解 SQLAlchemy,也能看出我们原本干净的模型现在充满了对 ORM 的依赖,而且模型本身也开始变得丑陋不堪。
+我们真的还能说这个模型对数据库无感知吗?当模型属性直接与数据库列耦合在一起时,
+它又怎么可能与存储问题相分离?
+
[role="nobreakinside less_space"]
-.Django's ORM Is Essentially the Same, but More Restrictive
+.Django's ORM Is Essentially the Same, but More Restrictive(Django 的 ORM 本质上是相同的,但限制更多)
****
((("Django", "ORM example")))
@@ -256,8 +336,10 @@ to database columns?
If you're more used to Django, the preceding "declarative" SQLAlchemy snippet
translates to something like this:
+如果你更熟悉 Django,上述“声明式”的 SQLAlchemy 代码片段可以转换成类似如下的内容:
+
[[django_orm_example]]
-.Django ORM example
+.Django ORM example(Django ORM 示例)
====
[source,python]
[role="skip"]
@@ -279,15 +361,20 @@ The point is the same--our model classes inherit directly from ORM
classes, so our model depends on the ORM. We want it to be the other
way around.
+重点是一样的——我们的模型类直接继承自 ORM 类,因此我们的模型依赖于 ORM。而我们希望情况正好相反。
+
Django doesn't provide an equivalent for SQLAlchemy's classical mapper,
but see <> for examples of how to apply dependency
inversion and the Repository pattern to Django.
+Django 不提供与 SQLAlchemy 的经典映射器等价的功能,但请参阅 <>,了解如何将依赖倒置原则和仓储模式应用于 Django 的示例。
+
****
==== Inverting the Dependency: ORM Depends on Model
+依赖倒置:ORM 依赖于模型
((("mappers")))
((("classical mapping")))
@@ -300,9 +387,12 @@ to define your schema separately, and to define an explicit _mapper_ for how to
between the schema and our domain model, what SQLAlchemy calls a
https://oreil.ly/ZucTG[classical mapping]:
+幸运的是,这并不是使用 SQLAlchemy 的唯一方法。另一种方式是单独定义你的模式,并明确定义一个 _映射器_(mapper),
+用于在模式和我们的领域模型之间进行转换,SQLAlchemy 将其称为 https://oreil.ly/ZucTG[经典映射]:
+
[role="nobreakinside less_space"]
[[sqlalchemy_classical_mapper]]
-.Explicit ORM mapping with SQLAlchemy Table objects (orm.py)
+.Explicit ORM mapping with SQLAlchemy Table objects (orm.py)(使用 SQLAlchemy 的 Table 对象进行显式 ORM 映射)
====
[source,python]
----
@@ -331,15 +421,19 @@ def start_mappers():
<1> The ORM imports (or "depends on" or "knows about") the domain model, and
not the other way around.
+ORM 导入(或“依赖于”或“了解”)领域模型,而不是相反的方向。
<2> We define our database tables and columns by using SQLAlchemy's
abstractions.footnote:[Even in projects where we don't use an ORM, we
often use SQLAlchemy alongside Alembic to declaratively create
schemas in Python and to manage migrations, connections,
and sessions.]
+我们使用 SQLAlchemy 的抽象来定义数据库表和列。脚注:[即使在没有使用 ORM 的项目中,我们通常也会结合使用 SQLAlchemy 和 Alembic,
+在 _Python_ 中以声明式创建模式,并管理迁移、连接和会话。]
<3> When we call the `mapper` function, SQLAlchemy does its magic to bind
our domain model classes to the various tables we've defined.
+当我们调用 `mapper` 函数时,SQLAlchemy 施展它的魔法,将我们的领域模型类绑定到我们定义的各个表。
// TODO: replace mapper() with registry.map_imperatively()
// https://docs.sqlalchemy.org/en/14/orm/mapping_styles.html?highlight=sqlalchemy#orm-imperative-mapping
@@ -349,6 +443,9 @@ easily load and save domain model instances from and to the database. But if
we never call that function, our domain model classes stay blissfully
unaware of the database.
+最终的结果是,如果我们调用 `start_mappers`,我们将能够轻松地从数据库加载和保存领域模型实例。
+但如果我们从未调用那个函数,我们的领域模型类将完全不需要了解数据库的存在。
+
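+A quick sketch of what that buys us in practice (assuming the `model` and
+`orm` modules above, with `metadata` defined alongside the tables):
+
+[source,python]
+----
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
+
+import model
+import orm
+
+orm.start_mappers()                  # bind the domain classes to the tables
+engine = create_engine("sqlite://")  # throwaway in-memory database
+orm.metadata.create_all(engine)
+session = sessionmaker(bind=engine)()
+
+session.add(model.OrderLine("order1", "RED-CHAIR", 12))
+session.commit()
+[line] = session.query(model.OrderLine).all()  # plain domain objects come back
+----
+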
// IDEA: add a note about mapper being maybe-deprecated, but link to
// the mailing list post where mike shows how to reimplement it manually.
@@ -356,13 +453,18 @@ This gives us all the benefits of SQLAlchemy, including the ability to use
`alembic` for migrations, and the ability to transparently query using our
domain classes, as we'll see.
+这为我们带来了 SQLAlchemy 的所有好处,包括使用 `alembic` 进行迁移的能力,
+以及使用领域类进行透明查询的能力,正如我们将会看到的那样。
+
((("object-relational mappers (ORMs)", "ORM depends on the data model", "testing the ORM")))
When you're first trying to build your ORM config, it can be useful to write
tests for it, as in the following example:
+当你第一次尝试构建 ORM 配置时,编写测试可能会很有用,例如以下示例所示:
+
[[orm_tests]]
-.Testing the ORM directly (throwaway tests) (test_orm.py)
+.Testing the ORM directly (throwaway tests) (test_orm.py)(直接测试 ORM(临时测试))
====
[source,python]
----
@@ -398,6 +500,9 @@ def test_orderline_mapper_can_save_lines(session):
pytest will inject them to the tests that need them by looking at their
function arguments. In this case, it's a SQLAlchemy database session.
((("pytest", "session argument")))
+如果你没用过 pytest,那么这个测试中的 `session` 参数需要解释一下。对于本书来说,你不必担心 pytest 或其夹具(fixtures)的细节,
+但简短的解释是:你可以将测试中的通用依赖定义为“夹具”,而 pytest 会通过检查测试函数的参数,
+将它们注入到需要的测试中。在这个例子中,`session` 是一个 SQLAlchemy 数据库会话。
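+For the curious, here is a sketch of the kind of conftest.py fixture that
+could provide that `session` (assuming the `metadata` and `start_mappers`
+from the orm.py shown earlier):
+
+[source,python]
+[role="skip"]
+----
+import pytest
+from sqlalchemy import create_engine
+from sqlalchemy.orm import clear_mappers, sessionmaker
+
+from orm import metadata, start_mappers
+
+
+@pytest.fixture
+def session():
+    engine = create_engine("sqlite:///:memory:")  # throwaway in-memory db
+    metadata.create_all(engine)
+    start_mappers()
+    yield sessionmaker(bind=engine)()
+    clear_mappers()
+----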
////
[SG] I set up the conftest to have a session, and could only get the tests to
@@ -414,12 +519,19 @@ only a small additional step to implement another abstraction called the
Repository pattern, which will be easier to write tests against and will
provide a simple interface for faking out later in tests.
+你可能不会保留这些测试——正如你即将看到的,一旦你完成了 ORM 和领域模型的依赖倒置,
+再实现另一个称为仓储模式(Repository pattern)的抽象就只需迈出一小步。
+该模式将更容易编写测试,并提供一个简单的接口,以便在之后的测试中方便地进行伪造(fake out)。
+
But we've already achieved our objective of inverting the traditional
dependency: the domain model stays "pure" and free from infrastructure
concerns. We could throw away SQLAlchemy and use a different ORM, or a totally
different persistence system, and the domain model doesn't need to change at
all.
+但我们已经实现了依赖倒置这一目标:领域模型保持“纯粹”,不涉及基础设施问题。我们可以抛弃 SQLAlchemy,
+使用不同的 ORM,甚至是完全不同的持久化系统,而领域模型完全不需要做任何改变。
+
Depending on what you're doing in your domain model, and especially if you
stray far from the OO paradigm, you may find it increasingly hard to get the
@@ -429,12 +541,18 @@ maintainers, and to Mike Bayer in particular.] As so often happens with
architectural decisions, you'll need to consider a trade-off. As the
Zen of Python says, "Practicality beats purity!"
+根据你在领域模型中执行的操作,尤其是当你偏离面向对象(OO)范式时,你可能会发现越来越难以让 ORM 产生满足你需求的准确行为,
+这时可能需要修改领域模型。footnote:[特别感谢极其乐于助人的 SQLAlchemy 维护人员,尤其是 Mike Bayer。] 正如架构决策中经常出现的情况,
+你需要权衡利弊。正如 Python 之禅所说:“实用性胜过纯粹性!”
+
((("SQLAlchemy", "using directly in API endpoint")))
At this point, though, our API endpoint might look something like
the following, and we could get it to work just fine:
+不过,此时我们的 API 端点可能看起来如下所示,而且我们完全可以让它正常工作:
+
[[api_endpoint_with_session]]
-.Using SQLAlchemy directly in our API endpoint
+.Using SQLAlchemy directly in our API endpoint(在我们的 API 端点中直接使用 SQLAlchemy)
====
[role="skip"]
[source,python]
@@ -470,18 +588,23 @@ add a try finally to close the session
////
=== Introducing the Repository Pattern
+引入仓储模式
((("Repository pattern", id="ix_Repo")))
((("domain model", startref="ix_domod")))
The _Repository_ pattern is an abstraction over persistent storage. It hides the
boring details of data access by pretending that all of our data is in memory.
+_仓储_ 模式是一种对持久存储的抽象。它通过假装所有数据都在内存中,隐藏了数据访问中乏味的细节。
+
If we had infinite memory in our laptops, we'd have no need for clumsy databases.
Instead, we could just use our objects whenever we liked. What would that look
like?
+如果我们的笔记本电脑拥有无限的内存,就不需要笨重的数据库了。我们可以随时使用我们的对象。那么这会是什么样子呢?
+
[[all_my_data]]
-.You have to get your data from somewhere
+.You have to get your data from somewhere(你必须从某个地方获取数据)
====
[role="skip"]
[source,python]
@@ -505,8 +628,12 @@ find them again. Our in-memory data would let us add new objects, just like a
list or a set. Because the objects are in memory, we never need to call a
`.save()` method; we just fetch the object we care about and modify it in memory.
+即使我们的对象在内存中,我们仍需要将它们放在 _某个地方_,以便能够再次找到它们。我们的内存数据允许我们像使用列表或集合那样添加新对象。
+由于对象在内存中,我们完全不需要调用 `.save()` 方法;只需获取我们关心的对象并在内存中修改它即可。
+
==== The Repository in the Abstract
+抽象意义上的仓储
((("Repository pattern", "simplest possible repository")))
((("Unit of Work pattern")))
@@ -520,11 +647,19 @@ We stick rigidly to using these methods for data access in our domain and our
service layer. This self-imposed simplicity stops us from coupling our domain
model to the database.
+最简单的仓库只包含两个方法:`add()` 用于将新项目加入仓库,`get()` 用于返回先前添加的项目。
+脚注:[ 你可能会想,“那 `list`、`delete` 或 `update` 呢?” 然而,在理想的情况下,
+我们一次只对模型对象进行修改,而删除通常以软删除的方式处理——比如 `batch.cancel()`。
+最后,更新操作由工作单元(Unit of Work)模式处理,如你将在 <> 中看到的那样。]
+我们严格坚持使用这些方法在领域层和服务层中进行数据访问。这种自我施加的简化能够防止我们的领域模型与数据库耦合。
+
((("abstract base classes (ABCs)", "ABC for the repository")))
Here's what an abstract base class (ABC) for our repository would look like:
+以下是我们的仓储的抽象基类(Abstract Base Class,ABC)大概的样子:
+
[[abstract_repo]]
-.The simplest possible repository (repository.py)
+.The simplest possible repository (repository.py)(最简单的仓储)
====
[source,python]
----
@@ -547,13 +682,17 @@ class AbstractRepository(abc.ABC):
may be), be running helpers like `pylint` and `mypy`.]
((("@abc.abstractmethod")))
((("abstract methods")))
+Python 提示:`@abc.abstractmethod` 是让抽象基类(ABC)在 Python 中真正“起作用”的为数不多的机制之一。
+如果一个类没有实现其父类中定义的所有 `abstractmethods`,Python 将拒绝让你实例化该类(下面的小例子演示了这一点)。
+footnote:[如果想真正充分利用抽象基类的好处(如果它们有的话),可以运行如 `pylint` 和 `mypy` 这样的辅助工具。]
<2> `raise NotImplementedError` is nice, but it's neither necessary nor sufficient.
In fact, your abstract methods can have real behavior that subclasses
can call out to, if you really want.
+`raise NotImplementedError` 很好用,但它既不是必要的,也不是充分的。实际上,如果你确实需要,你的抽象方法甚至可以包含实际的行为,供子类调用。
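+To see the mechanism from <1> in action, here is a tiny self-contained
+demonstration (the class names are hypothetical, invented for this
+illustration):
+
+[source,python]
+[role="skip"]
+----
+import abc
+
+
+class AbstractRepository(abc.ABC):
+    @abc.abstractmethod
+    def add(self, batch):
+        raise NotImplementedError
+
+
+class BrokenRepository(AbstractRepository):
+    pass  # forgot to implement add()
+
+
+BrokenRepository()
+# TypeError: Can't instantiate abstract class BrokenRepository
+# (the exact wording varies across Python versions)
+----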
[role="pagebreak-before less_space"]
-.Abstract Base Classes, Duck Typing, and Protocols
+.Abstract Base Classes, Duck Typing, and Protocols(抽象基类、鸭子类型和协议)
*******************************************************************************
((("abstract base classes (ABCs)", "using duck typing and protocols instead of")))
@@ -561,6 +700,8 @@ class AbstractRepository(abc.ABC):
We're using abstract base classes in this book for didactic reasons: we hope
they help explain what the interface of the repository abstraction is.
+我们在本书中使用抽象基类是出于教学目的:我们希望它们能帮助说明仓储抽象的接口是什么。
+
((("duck typing")))
In real life, we've sometimes found ourselves deleting ABCs from our production
code, because Python makes it too easy to ignore them, and they end up
@@ -568,15 +709,23 @@ unmaintained and, at worst, misleading. In practice we often just rely on
Python's duck typing to enable abstractions. To a Pythonista, a repository is
_any_ object that has pass:[add(thing)] and pass:[get(id)] methods.
+在实际工作中,我们有时会从生产代码中删除抽象基类(ABC),因为 Python 让忽略它们变得太容易了,结果这些类往往无人维护,
+甚至在最坏的情况下会引起误导。实际上,我们经常只是依赖 Python 的鸭子类型来实现抽象。对于一个 Python 开发者来说,
+一个仓储就是 _任何_ 具有 pass:[add(thing)] 和 pass:[get(id)] 方法的对象。
+
((("PEP 544 protocols")))
An alternative to look into is https://oreil.ly/q9EPC[PEP 544 protocols].
These give you typing without the possibility of inheritance, which "prefer
composition over inheritance" fans will particularly like.
+一种可以考虑的替代方案是 https://oreil.ly/q9EPC[PEP 544 协议]。
+它们提供了类型支持,但没有继承的可能性,对于那些提倡“组合优于继承”的爱好者来说,这将特别受欢迎。
+
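+A minimal sketch of what such a protocol might look like for our repository
+(hypothetical, assuming the domain `model` module; the book itself sticks
+with ABCs):
+
+[source,python]
+[role="skip"]
+----
+from typing import Protocol
+
+import model
+
+
+class Repository(Protocol):
+    # any object with these two methods satisfies the type checker;
+    # no inheritance required
+    def add(self, batch: model.Batch) -> None: ...
+
+    def get(self, reference: str) -> model.Batch: ...
+----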
*******************************************************************************
==== What Is the Trade-Off?
+什么是权衡取舍?
[quote, Rich Hickey]
@@ -584,37 +733,53 @@ ____
You know they say economists know the price of everything and the value of
nothing? Well, programmers know the benefits of everything and the trade-offs
of nothing.
+
+你知道人们常说经济学家知道一切东西的价格,却不知道它们的价值吗?那么,程序员则是知道一切事物的好处,却不了解它们的权衡取舍。
____
((("Repository pattern", "trade-offs")))
Whenever we introduce an architectural pattern in this book, we'll always
ask, "What do we get for this? And what does it cost us?"
+每当我们在本书中引入一种架构模式时,我们都会问:“我们能从中获得什么?而它的代价是什么?”
+
Usually, at the very least, we'll be introducing an extra layer of abstraction,
and although we may hope it will reduce complexity overall, it does add
complexity locally, and it has a cost in terms of the raw numbers of moving parts and
ongoing maintenance.
+通常情况下,至少我们会引入一个额外的抽象层。尽管我们可能希望它能整体上降低复杂性,但它确实会在局部增加复杂性,
+同时在可变部分的数量和持续维护方面也会付出代价。
+
The Repository pattern is probably one of the easiest choices in the book, though,
if you're already heading down the DDD and dependency inversion route. As far
as our code is concerned, we're really just swapping the SQLAlchemy abstraction
(`session.query(Batch)`) for a different one (`batches_repo.get`) that we
designed.
+如果你已经选择了领域驱动设计(DDD)和依赖倒置的路径,那么仓储模式可能是本书中最容易做出的选择之一。
+对于我们的代码来说,我们实际上只是将 SQLAlchemy 的抽象(`session.query(Batch)`)替换为一个我们自己设计的抽象(`batches_repo.get`)。
+
We will have to write a few lines of code in our repository class each time we
add a new domain object that we want to retrieve, but in return we get a
simple abstraction over our storage layer, which we control. The Repository pattern would make
it easy to make fundamental changes to the way we store things (see
<>), and as we'll see, it is easy to fake out for unit tests.
+每次我们新增一个需要检索的领域对象时,都需要在我们的仓储类中编写几行代码,但作为回报,我们获得了一个简单的、由我们掌控的存储层抽象。
+仓储模式让我们可以轻松对存储方式进行根本性的更改(参见 <>),并且正如我们将会看到的,它也很容易在单元测试中伪造(fake out)。
+
((("domain driven design (DDD)", "Repository pattern and")))
In addition, the Repository pattern is so common in the DDD world that, if you
do collaborate with programmers who have come to Python from the Java and C#
worlds, they're likely to recognize it. <> illustrates the pattern.
+此外,仓库模式在 DDD 世界中非常常见,因此如果你与来自 Java 和 C# 世界的程序员合作,他们可能会认出这个模式。
+<> 展示了这一模式的示意图。
+
[role="width-60"]
[[repository_pattern_diagram]]
-.Repository pattern
+.Repository pattern(仓储模式)
image::images/apwp_0205.png[]
[role="image-source"]
----
@@ -646,13 +811,17 @@ integration test, since we're checking that our code (the repository) is
correctly integrated with the database; hence, the tests tend to mix
raw SQL with calls and assertions on our own code.
+一如既往,我们从测试开始。这可能会被归类为集成测试,因为我们要检查我们的代码(仓储)是否正确地与数据库集成;
+因此,这些测试往往会将原始 SQL 与对我们自己代码的调用和断言混合在一起。
+
TIP: Unlike the ORM tests from earlier, these tests are good candidates for
staying part of your codebase longer term, particularly if any parts of
your domain model mean the object-relational map is nontrivial.
+与之前的 ORM 测试不同,这些测试非常适合长期保留在你的代码库中,特别是当你的领域模型的某些部分使对象关系映射变得不那么简单时。
[[repo_test_save]]
-.Repository test for saving an object (test_repository.py)
+.Repository test for saving an object (test_repository.py)(测试仓储保存对象的方法)
====
[source,python]
----
@@ -671,22 +840,27 @@ def test_repository_can_save_a_batch(session):
====
<1> `repo.add()` is the method under test here.
+`repo.add()` 是这里的被测试方法。
<2> We keep the `.commit()` outside of the repository and make
it the responsibility of the caller. There are pros and cons for
this; some of our reasons will become clearer when we get to
<>.
+我们将 `.commit()` 保留在仓储之外,并将其作为调用者的职责。这么做有利有弊;当我们进入 <> 时,其中一些原因会变得更加清晰。
<3> We use the raw SQL to verify that the right data has been saved.
+我们使用原始 SQL 来验证是否保存了正确的数据。
((("SQL", "repository test for retrieving complex object")))
((("Repository pattern", "testing the repository with retrieving a complex object")))
The next test involves retrieving batches and allocations, so it's more
complex:
+下一个测试涉及检索批次和分配,因此它更复杂一些:
+
[[repo_test_retrieve]]
-.Repository test for retrieving a complex object (test_repository.py)
+.Repository test for retrieving a complex object (test_repository.py)(测试仓储检索复杂对象的方法)
====
[source,python]
----
@@ -727,17 +901,21 @@ def test_repository_can_retrieve_a_batch_with_allocations(session):
<1> This tests the read side, so the raw SQL is preparing data to be read
by the `repo.get()`.
+这个测试关注的是读取部分,因此原始 SQL 用于准备将由 `repo.get()` 读取的数据。
<2> We'll spare you the details of `insert_batch` and `insert_allocation`;
the point is to create a couple of batches, and, for the
batch we're interested in, to have one existing order line allocated to it.
+我们不会详细说明 `insert_batch` 和 `insert_allocation` 的细节;重点是创建几个批次,并为我们感兴趣的那个批次分配一个已有的订单项。
<3> And that's what we verify here. The first `assert ==` checks that the
types match, and that the reference is the same (because, as you remember,
`Batch` is an entity, and we have a custom ++__eq__++ for it).
+这正是我们在这里验证的。第一个 `assert ==` 检查类型是否匹配,以及引用是否相同(因为,如你所记得的,`Batch` 是一个实体,我们为它定义了自定义的 ++__eq__++ 方法;下面的小例子回顾了它的定义)。
<4> So we also explicitly check on its major attributes, including
`._allocations`, which is a Python set of `OrderLine` value objects.
+因此,我们还明确检查了它的主要属性,包括 `._allocations`,这是一个由 `OrderLine` 值对象组成的 Python 集合。
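+As a reminder, the entity equality for `Batch` (defined back in the domain
+model chapter) looks roughly like this--identity is based on the reference,
+not on the object's other attributes:
+
+[source,python]
+[role="skip"]
+----
+class Batch:
+    ...
+
+    def __eq__(self, other):
+        if not isinstance(other, Batch):
+            return False
+        return other.reference == self.reference
+
+    def __hash__(self):
+        return hash(self.reference)
+----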
((("Repository pattern", "typical repository")))
Whether or not you painstakingly write tests for every model is a judgment
@@ -747,12 +925,17 @@ at all, if they all follow a similar pattern. In our case, the ORM config
that sets up the `._allocations` set is a little complex, so it merited a
specific test.
+是否为每个模型都细致地编写测试是一个主观判断。一旦你为一个类完成了创建/修改/保存的测试,你可能会满意于仅为其他类编写一个简单的往返测试,
+或者如果它们都遵循类似的模式,甚至可以不编写任何测试。在我们的案例中,设置 `._allocations` 集合的 ORM 配置有些复杂,因此值得编写一个专门的测试。
+
You end up with something like this:
+你最终会得到如下内容:
+
[[batch_repository]]
-.A typical repository (repository.py)
+.A typical repository (repository.py)(一个典型的仓储)
====
[source,python]
----
@@ -777,8 +960,10 @@ class SqlAlchemyRepository(AbstractRepository):
((("APIs", "using repository directly in API endpoint")))
And now our Flask endpoint might look something like the following:
+现在我们的 Flask 端点可能会看起来如下:
+
[[api_endpoint_with_repo]]
-.Using our repository directly in our API endpoint
+.Using our repository directly in our API endpoint(在我们的 API 端点中直接使用仓储)
====
[role="skip"]
[source,python]
@@ -797,7 +982,7 @@ def allocate_endpoint():
====
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(留给读者的练习)
******************************************************************************
((("SQL", "ORM and Repository pattern as abstractions in front of")))
@@ -809,22 +994,32 @@ in front of raw SQL, so using one behind the other isn't really necessary. Why
not have a go at implementing our repository without using the ORM?
You'll find the code https://github.com/cosmicpython/code/tree/chapter_02_repository_exercise[on GitHub].
+前几天我们在一次 DDD 会议上遇到了一位朋友,他说:“我已经有 10 年没用过 ORM 了。”仓储模式和 ORM 都是原始 SQL 之上的抽象,
+因此在一个抽象后面再使用另一个抽象并不是必须的。为什么不尝试一下在不使用 ORM 的情况下实现我们的仓储呢?
+你可以在 https://github.com/cosmicpython/code/tree/chapter_02_repository_exercise[GitHub] 上找到相关代码。
+
We've left the repository tests, but figuring out what SQL to write is up
to you. Perhaps it'll be harder than you think; perhaps it'll be easier.
But the nice thing is, the rest of your application just doesn't care.
+我们保留了仓储的测试,但具体要写哪些 SQL 语句就交给你来决定了。也许这会比你想的更难,也许会更简单。
+但很棒的一点是,你的应用程序的其他部分并不关心这些。(下面给出一个可能的起点示意。)
+
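+As a hint (one possible shape, not the reference solution--the exercise
+branch above has the real starting point), the `add()` of a repository built
+on raw SQL might start out something like this:
+
+[source,python]
+[role="skip"]
+----
+class SqlRepository:
+    def __init__(self, session):
+        self.session = session
+
+    def add(self, batch):
+        # column names assume the batches table from the chapter's schema;
+        # SQLAlchemy 1.x style--2.0 would wrap the statement in text()
+        self.session.execute(
+            "INSERT INTO batches (reference, sku, _purchased_quantity, eta)"
+            " VALUES (:reference, :sku, :qty, :eta)",
+            dict(
+                reference=batch.reference,
+                sku=batch.sku,
+                qty=batch._purchased_quantity,
+                eta=batch.eta,
+            ),
+        )
+----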
******************************************************************************
=== Building a Fake Repository for Tests Is Now Trivial!
+为测试构建一个假仓储现在变得非常简单!
((("Repository pattern", "building fake repository for tests")))
((("set, fake repository as wrapper around")))
Here's one of the biggest benefits of the Repository pattern:
+以下是仓储模式的最大好处之一:
+
[[fake_repository]]
-.A simple fake repository using a set (repository.py)
+.A simple fake repository using a set (repository.py)(使用集合实现的一个简单的假仓储)
====
[role="skip"]
[source,python]
@@ -847,11 +1042,15 @@ class FakeRepository(AbstractRepository):
Because it's a simple wrapper around a `set`, all the methods are one-liners.
+由于它是对一个 `set` 的简单封装,所有方法都可以用一行代码实现。
+
Using a fake repo in tests is really easy, and we have a simple
abstraction that's easy to use and reason about:
+在测试中使用假仓储非常简单,而且我们有一个易于使用且便于理解的简单抽象:
+
[[fake_repository_example]]
-.Example usage of fake repository (test_api.py)
+.Example usage of fake repository (test_api.py)(假仓储的示例用法)
====
[role="skip"]
[source,python]
@@ -862,14 +1061,18 @@ fake_repo = FakeRepository([batch1, batch2, batch3])
You'll see this fake in action in the next chapter.
+你将在下一章中看到这个假仓储的实际应用。
+
TIP: Building fakes for your abstractions is an excellent way to get design
feedback: if it's hard to fake, the abstraction is probably too
complicated.
+为你的抽象构建假的实现是获取设计反馈的极好方式:如果难以伪造,那么这个抽象可能过于复杂。
[[what_is_a_port_and_what_is_an_adapter]]
=== What Is a Port and What Is an Adapter, in Python?
+在 Python 中,什么是端口(Port),什么是适配器(Adapter)?
((("ports", "defined")))
((("adapters", "defined")))
@@ -878,11 +1081,17 @@ we want to focus on is dependency inversion, and the specifics of the
technique you use don't matter too much. Also, we're aware that different
people use slightly different definitions.
+我们不想在术语上花费太多精力,因为我们主要关注的是依赖倒置,而你使用的具体技术的细节并不是那么重要。
+同时,我们也清楚,不同的人对这些术语的定义可能会略有不同。
+
Ports and adapters came out of the OO world, and the definition we hold onto
is that the _port_ is the _interface_ between our application and whatever
it is we wish to abstract away, and the _adapter_ is the _implementation_
behind that interface or abstraction.
+端口(Ports)和适配器(Adapters)来源于面向对象(OO)世界,我们所坚持的定义是:**端口**(Port)是我们的应用程序与我们
+希望抽象化的事物之间的**接口**,而**适配器**(Adapter)是该接口或抽象背后的**实现**。
+
((("interfaces, Python and")))
((("duck typing", "for ports")))
((("abstract base classes (ABCs)", "using for ports")))
@@ -892,12 +1101,19 @@ abstract base class, that's the port. If not, the port is just the duck type
that your adapters conform to and that your core application expects—the
function and method names in use, and their argument names and types.
+在 Python 中没有真正意义上的接口,因此尽管通常可以很容易地识别适配器,但定义端口可能会更困难。
+如果你使用的是抽象基类(ABC),那么它就是端口。如果没有使用抽象基类,那么端口就是你的适配器所遵循、
+且你的核心应用程序所期望的鸭子类型——也就是实际使用的函数和方法名称,以及它们的参数名称和类型。
+
Concretely, in this chapter, `AbstractRepository` is the port, and
`SqlAlchemyRepository` and `FakeRepository` are the adapters.
+具体来说,在本章中,`AbstractRepository` 是端口,而 `SqlAlchemyRepository` 和 `FakeRepository` 则是适配器。
+
=== Wrap-Up
+总结
((("Repository pattern", "and persistence ignorance, trade-offs")))
((("persistence ignorance", "trade-offs")))
@@ -908,10 +1124,16 @@ to be built this way; only sometimes does the complexity of the app and domain
make it worth investing the time and effort in adding these extra layers of
indirection.
+记住 Rich Hickey 的那句名言,在每一章中,我们都会总结我们引入的每种架构模式的成本和收益。
+我们希望明确一点,我们并不是说每个应用程序都需要以这种方式构建;只有当应用程序和领域的复杂性足够高时,
+才值得投入时间和精力来添加这些额外的间接层。
+
With that in mind, <> shows
some of the pros and cons of the Repository pattern and our persistence-ignorant
model.
+考虑到这一点,<> 展示了仓储模式及我们的持久化无关模型的一些优点和缺点。
+
////
[SG] is it worth mentioning that the repository is specifically intended for add and get
of our domain model objects, rather than something used to add and get any old data
@@ -920,28 +1142,33 @@ which you might call a DAO. Repository is more close to the business domain.
[[chapter_02_repository_tradeoffs]]
[options="header"]
-.Repository pattern and persistence ignorance: the trade-offs
+.Repository pattern and persistence ignorance: the trade-offs(仓储模式与持久化无关性的权衡)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* We have a simple interface between persistent storage and our domain model.
+我们在持久化存储和领域模型之间有一个简单的接口。
* It's easy to make a fake version of the repository for unit testing, or to
swap out different storage solutions, because we've fully decoupled the model
from infrastructure concerns.
+为单元测试制作一个仓储的假版本非常容易,或者更换不同的存储解决方案也很方便,因为我们已经完全将模型与基础设施的关切解耦了。
* Writing the domain model before thinking about persistence helps us focus on
the business problem at hand. If we ever want to radically change our approach,
we can do that in our model, without needing to worry about foreign keys
or migrations until later.
+在考虑持久化之前编写领域模型可以帮助我们专注于手头的业务问题。如果我们想彻底改变我们的解决方法,我们可以在模型中进行,而不需要在初期就为外键或迁移操心。
* Our database schema is really simple because we have complete control over
how we map our objects to tables.
+我们的数据库模式非常简单,因为我们完全可以控制如何将对象映射到表中。
a|
* An ORM already buys you some decoupling. Changing foreign keys might be hard,
but it should be pretty easy to swap between MySQL and Postgres if you
ever need to.
+ORM 已经为你提供了一定程度的解耦。更改外键可能会比较困难,但如果你需要在 MySQL 和 Postgres 之间切换,应该会相对容易一些。
////
[KP] I always found this benefit of ORMs rather weak. In the rare cases when I
@@ -952,10 +1179,12 @@ Postgres fields) you usually lose the portability.
* Maintaining ORM mappings by hand requires extra work and extra code.
+手动维护 ORM 映射需要额外的工作量和代码量。
* Any extra layer of indirection always increases maintenance costs and
adds a "WTF factor" for Python programmers who've never seen the Repository pattern
before.
+任何额外的间接层都会增加维护成本,并对那些从未见过仓储模式的 Python 程序员增加一种“WTF 因素”(困惑感)。
|===
<> shows the basic thesis: yes, for simple
@@ -963,8 +1192,12 @@ cases, a decoupled domain model is harder work than a simple ORM/ActiveRecord
pattern.footnote:[Diagram inspired by a post called
https://oreil.ly/fQXkP["Global Complexity, Local Simplicity"] by Rob Vens.]
+<> 展示了基本的论点:是的,对于简单的情况,一个解耦的领域模型比一个简单的 ORM/ActiveRecord 模式要更费事。
+footnote:[图示灵感来源于 Rob Vens 的一篇名为 https://oreil.ly/fQXkP[《全局复杂性,局部简单性》(Global Complexity, Local Simplicity)] 的文章。]
+
TIP: If your app is just a simple CRUD (create-read-update-delete) wrapper
around a database, then you don't need a domain model or a repository.
+如果你的应用程序只是一个围绕数据库的简单 CRUD(创建-读取-更新-删除)封装,那么你不需要领域模型或仓储。
((("domain model", "trade-offs as a diagram")))
((("Vens, Rob")))
@@ -973,9 +1206,11 @@ But the more complex the domain, the more an investment in freeing
yourself from infrastructure concerns will pay off in terms of the ease of
making changes.
+但领域越复杂,在摆脱基础设施相关问题上的投入就越有回报,因为这会显著提高更改的灵活性和方便性。
+
[[domain_model_tradeoffs_diagram]]
-.Domain model trade-offs as a diagram
+.Domain model trade-offs as a diagram(领域模型权衡关系图)
image::images/apwp_0206.png[]
@@ -988,26 +1223,39 @@ before we could run any tests. As it is, because our model is just plain
old Python objects, we can change a `set()` to being a new attribute, without
needing to think about the database until later.
+我们的示例代码的复杂性不足以完整地展现图表右侧的情况,但其中确实提供了一些提示。例如,想象一下,
+如果有一天我们决定将分配(allocations)从 `Batch` 对象移至 `OrderLine`,在使用 Django 这样的框架时,
+我们必须先定义并仔细考虑数据库迁移的问题,然后才能运行任何测试。而按照我们的方式,因为我们的模型只是一些普通的 Python 对象,
+所以我们可以简单地将一个 `set()` 改为新的属性,而不需要在初期考虑数据库问题。
+
[role="nobreakinside"]
-.Repository Pattern Recap
+.Repository Pattern Recap(仓储模式回顾)
*****************************************************************
-Apply dependency inversion to your ORM::
+Apply dependency inversion to your ORM(对你的 ORM 应用依赖倒置原则)::
Our domain model should be free of infrastructure concerns,
so your ORM should import your model, and not the other way
around.
((("Repository pattern", "recap of important points")))
+我们的领域模型应当与基础设施无关,因此你的 ORM 应该导入模型,而不是模型导入 ORM。
-The Repository pattern is a simple abstraction around permanent storage::
+The Repository pattern is a simple abstraction around permanent storage(仓储模式是围绕永久存储的一种简单抽象)::
The repository gives you the illusion of a collection of in-memory
objects. It makes it easy to create a `FakeRepository` for
testing and to swap fundamental details of your
infrastructure without disrupting your core application. See
<> for an example.
+仓储为你提供了一种内存对象集合的假象。它使你可以轻松创建一个用于测试的 `FakeRepository`,
+并在不干扰核心应用程序的情况下更换基础设施的关键细节。请参见 <> 获取示例。
*****************************************************************
You'll be wondering, how do we instantiate these repositories, fake or
real? What will our Flask app actually look like? You'll find out in the next
exciting installment, <>.
+你可能会想,我们如何实例化这些仓储,无论是假的还是实际的?我们的 Flask 应用实际上会是什么样子?
+答案将在下一章节 <> 的精彩内容中揭晓。
+
But first, a brief digression.
((("Repository pattern", startref="ix_Repo")))
+
+但首先,让我们稍作旁注。
diff --git a/chapter_03_abstractions.asciidoc b/chapter_03_abstractions.asciidoc
index 8f7af2a8..af7137d4 100644
--- a/chapter_03_abstractions.asciidoc
+++ b/chapter_03_abstractions.asciidoc
@@ -1,5 +1,6 @@
[[chapter_03_abstractions]]
== A Brief Interlude: On Coupling [.keep-together]#and Abstractions#
+小插曲:关于耦合与抽象
((("abstractions", id="ix_abs")))
Allow us a brief digression on the subject of abstractions, dear reader.
@@ -7,12 +8,17 @@ We've talked about _abstractions_ quite a lot. The Repository pattern is an
abstraction over permanent storage, for example. But what makes a good
abstraction? What do we want from abstractions? And how do they relate to testing?
+亲爱的读者,请允许我们对抽象这一主题做一个简短的旁注。我们已经多次提到 _抽象_。例如,仓储模式就是对永久存储的抽象。
+那么,什么才是一个良好的抽象?我们希望从抽象中获得什么?它们又是如何与测试相关的?
+
[TIP]
====
The code for this chapter is in the
chapter_03_abstractions branch https://oreil.ly/k6MmV[on GitHub]:
+本章的代码位于 https://oreil.ly/k6MmV[GitHub] 上的 chapter_03_abstractions 分支:
+
----
git clone https://github.com/cosmicpython/code.git
git checkout chapter_03_abstractions
@@ -30,6 +36,10 @@ we get to play with ideas freely, hammering things out and refactoring
aggressively. In a large-scale system, though, we become constrained by the
decisions made elsewhere in the system.
+本书的一个核心主题,隐藏在各种花哨的模式中,就是我们可以通过简单的抽象来隐藏杂乱的细节。当我们为乐趣编写代码,或者在进行编程练习(kata)时,footnote:[代码 kata 是一种小型、封闭的编程挑战,通常用于练习 TDD。请参考 https://web.archive.org/web/20221024055359/http://www.peterprovost.org/blog/2012/05/02/kata-the-only-way-to-learn-tdd/["Kata—The Only Way to Learn TDD"],作者:Peter Provost。]
+我们可以自由地尝试想法,大胆推敲并积极地进行重构。然而,在一个大型系统中,我们却会受到系统其他部分所做决定的限制。
+
((("coupling")))
((("cohesion, high, between coupled elements")))
When we're unable to change component A for fear of breaking component B, we say
@@ -38,6 +48,9 @@ a sign that our code is working together, each component supporting the others,
fitting in place like the gears of a watch. In jargon, we say this works when
there is high _cohesion_ between the coupled elements.
+当我们因为担心修改组件A会破坏组件B而无法改变组件A时,我们称这些组件变得 _耦合_ 了。在局部范围内,耦合是件好事:它表明我们的代码在协同工作,
+每个组件都在支持其他组件,所有组件像手表的齿轮一样完美契合。用术语来说,当耦合的元素之间具有高度的 _内聚_ 时,这种协同才是有效的。
+
((("Ball of Mud pattern")))
((("coupling", "disadvantages of")))
Globally, coupling is a nuisance: it increases the risk and the cost of changing
@@ -47,15 +60,21 @@ if we're unable to prevent coupling between elements that have no cohesion, that
coupling increases superlinearly until we are no longer able to effectively
change our systems.
+从全局来看,耦合却是一种麻烦:它增加了修改代码的风险和成本,有时甚至会让我们觉得完全无法做出任何更改。
+这正是“泥球模式”(Ball of Mud pattern)的问题所在:随着应用程序的增长,如果我们无法阻止没有内聚性的元素之间的耦合,
+这种耦合会呈现超线性增长,直到我们再也无法有效地修改系统。
+
((("abstractions", "using to reduce coupling")))
((("coupling", "reducing by abstracting away details")))
We can reduce the degree of coupling within a system
(<>) by abstracting away the details
(<>).
+我们可以通过抽象掉细节(<>)来减少系统中的耦合程度(<>)。
+
[role="width-50"]
[[coupling_illustration1]]
-.Lots of coupling
+.Lots of coupling(大量耦合)
image::images/apwp_0301.png[]
[role="image-source"]
----
@@ -71,7 +90,7 @@ image::images/apwp_0301.png[]
[role="width-90"]
[[coupling_illustration2]]
-.Less coupling
+.Less coupling(较少耦合)
image::images/apwp_0302.png[]
[role="image-source"]
----
@@ -93,14 +112,21 @@ two; the number of arrows indicates lots of kinds of dependencies
between the two. If we need to change system B, there's a good chance that the
change will ripple through to system A.
+在这两张图中,我们都有一对子系统,其中一个依赖于另一个。在 <> 中,这两个系统之间有高度的耦合;
+箭头的数量表明两者之间存在多种依赖关系。如果我们需要更改系统B,很可能这种更改会波及到系统A。
+
In <>, though, we have reduced the degree of coupling by inserting a
new, simpler abstraction. Because it is simpler, system A has fewer
kinds of dependencies on the abstraction. The abstraction serves to
protect us from change by hiding away the complex details of whatever system B
does—we can change the arrows on the right without changing the ones on the left.
+然而,在 <> 中,我们通过引入一个新的、更简单的抽象来降低耦合程度。由于抽象更简单,系统A对该抽象的依赖种类就更少。
+这个抽象通过隐藏系统B的复杂细节,保护我们免受变更的影响——我们可以更改右边的箭头,而不需要更改左边的箭头。
+
[role="pagebreak-before less_space"]
=== Abstracting State Aids Testability
+抽象状态有助于提高可测试性
((("abstractions", "abstracting state to aid testability", id="ix_absstate")))
((("testing", "abstracting state to aid testability", id="ix_tstabs")))
@@ -109,10 +135,15 @@ does—we can change the arrows on the right without changing the ones on the le
Let's see an example. Imagine we want to write code for synchronizing two
file directories, which we'll call the _source_ and the _destination_:
+让我们来看一个例子。假设我们想编写用于同步两个文件目录的代码,我们将它们分别称为 _源目录_ 和 _目标目录_:
+
* If a file exists in the source but not in the destination, copy the file over.
+如果文件存在于源目录但不存在于目标目录中,则将文件复制过去。
* If a file exists in the source, but it has a different name than in the destination,
rename the destination file to match.
+如果文件存在于源目录中,但在目标目录中的名称不同,则将目标目录中的文件重命名以匹配源目录。
* If a file exists in the destination but not in the source, remove it.
+如果文件存在于目标目录但不存在于源目录中,则将其删除。
((("hashing a file")))
Our first and third requirements are simple enough: we can just compare two
@@ -121,8 +152,11 @@ we'll have to inspect the content of files. For this, we can use a hashing
function like MD5 or SHA-1. The code to generate a SHA-1 hash from a file is simple
enough:
+我们的第一个和第三个需求相对简单:我们只需比较两组路径列表即可。然而,第二个需求就比较棘手了。
+为了检测重命名,我们必须检查文件的内容。为此,我们可以使用诸如 MD5 或 SHA-1 之类的哈希函数。从文件生成一个 SHA-1 哈希的代码相对简单:
+
[[hash_file]]
-.Hashing a file (sync.py)
+.Hashing a file (sync.py)(对文件进行哈希处理)
====
[source,python]
----
@@ -143,12 +177,18 @@ def hash_file(path):
Now we need to write the bit that makes decisions about what to do—the business
logic, if you will.
+现在我们需要编写用于决定如何操作的部分——也就是所谓的业务逻辑。
+
When we have to tackle a problem from first principles, we usually try to write
a simple implementation and then refactor toward better design. We'll use
this approach throughout the book, because it's how we write code in the real
world: start with a solution to the smallest part of the problem, and then
iteratively make the solution richer and better designed.
+当我们从基本原理入手解决问题时,通常会尝试先编写一个简单的实现,然后逐步重构以实现更好的设计。
+我们将在整本书中使用这种方法,因为这也是我们在现实世界中编写代码的方式:从问题中最小的部分开始找到一个解决方案,
+然后通过迭代使解决方案更加完善且设计更优。
+
////
[SG] this may just be my lack of Python experience but it would have helped me to see
from pathlib import Path before this code snippet so that I might be able to guess
@@ -158,8 +198,10 @@ be too much to ask..
Our first hackish approach looks something like this:
+我们第一个有些粗糙的实现看起来像这样:
+
[[sync_first_cut]]
-.Basic sync algorithm (sync.py)
+.Basic sync algorithm (sync.py)(基础的同步算法)
====
[source,python]
[role="non-head"]
@@ -206,9 +248,11 @@ def sync(source, dest):
Fantastic! We have some code and it _looks_ OK, but before we run it on our
hard drive, maybe we should test it. How do we go about testing this sort of thing?
+太棒了!我们已经有了一些代码,而且它 _看起来_ 没问题,但在我们运行它操作硬盘之前,也许应该先测试一下。那么,我们该如何测试这类东西呢?
+
[[ugly_sync_tests]]
-.Some end-to-end tests (test_sync.py)
+.Some end-to-end tests (test_sync.py)(一些端到端测试)
====
[source,python]
[role="non-head"]
@@ -262,17 +306,26 @@ our domain logic, "figure out the difference between two directories," is tightl
coupled to the I/O code. We can't run our difference algorithm without calling
the `pathlib`, `shutil`, and `hashlib` modules.
+哇,这仅仅为了两个简单的用例就要进行这么多的设置!问题在于,我们的领域逻辑“找出两个目录之间的差异”与 I/O 代码耦合得太紧密了。
+我们无法在不调用 `pathlib`、`shutil` 和 `hashlib` 模块的情况下运行我们的差异算法。
+
And the trouble is, even with our current requirements, we haven't written
enough tests: the current implementation has several bugs (the
`shutil.move()` is wrong, for example). Getting decent coverage and revealing
these bugs means writing more tests, but if they're all as unwieldy as the preceding
ones, that's going to get real painful real quickly.
+麻烦的是,即使按照我们当前的需求,我们也没有编写足够的测试:当前的实现中存在几个错误(例如,`shutil.move()` 就是错的)。
+要获得足够的覆盖率并揭示这些错误,就意味着要编写更多的测试,但如果这些测试都像前面的那样笨重,事情很快就会变得非常痛苦。
+
On top of that, our code isn't very extensible. Imagine trying to implement
a `--dry-run` flag that gets our code to just print out what it's going to
do, rather than actually do it. Or what if we wanted to sync to a remote server,
or to cloud storage?
+除此之外,我们的代码扩展性也很差。想象一下,如果我们尝试实现一个 `--dry-run` 标志,让代码只是打印出它将要执行的操作,
+而不是实际执行操作,该怎么做?又或者,如果我们想要同步到远程服务器或云存储呢?
+
((("abstractions", "abstracting state to aid testability", startref="ix_absstate")))
((("testing", "abstracting state to aid testability", startref="ix_tstabs")))
((("state", "abstracting to aid testability", startref="ix_stateabs")))
@@ -284,25 +337,37 @@ We can definitely refactor these tests (some of the cleanup could go into pytest
fixtures, for example) but as long as we're doing filesystem operations, they're
going to stay slow and be hard to read and write.
+我们的高级代码与低级细节耦合在一起,这让生活变得困难。随着我们考虑的场景变得更加复杂,我们的测试将变得越发笨重。
+我们确实可以重构这些测试(例如,可以将一些清理操作放入 pytest 的 fixture 中),但只要我们继续执行文件系统操作,
+测试仍然会很慢,并且难以阅读和编写。
+
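+For instance, here is a hedged sketch of how pytest's built-in `tmp_path`
+fixture could absorb the setup and cleanup (the fixture name `test_dirs` is
+hypothetical):
+
+[source,python]
+[role="skip"]
+----
+import pytest
+
+
+@pytest.fixture
+def test_dirs(tmp_path):
+    # tmp_path is a fresh per-test directory; pytest cleans it up for us
+    source = tmp_path / "source"
+    dest = tmp_path / "dest"
+    source.mkdir()
+    dest.mkdir()
+    yield source, dest
+----
+
+This tidies the tests, but each one still hits the real filesystem, so the
+underlying slowness remains.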
[role="pagebreak-before less_space"]
=== Choosing the Right Abstraction(s)
+选择合适的抽象
((("abstractions", "choosing right abstraction", id="ix_abscho")))
((("filesystems", "writing code to synchronize source and target directories", "choosing right abstraction", id="ix_filesyncabs")))
What could we do to rewrite our code to make it more testable?
+我们可以做些什么来重写代码以使其更具可测试性呢?
+
((("responsibilities of code")))
First, we need to think about what our code needs from the filesystem.
Reading through the code, we can see that three distinct things are happening.
We can think of these as three distinct _responsibilities_ that the code has:
+首先,我们需要思考代码对文件系统的需求。通过阅读代码,我们可以看到发生了三个不同的操作。我们可以将这些视为代码的三项不同 _职责_:
+
1. We interrogate the filesystem by using `os.walk` and determine hashes for a
series of paths. This is similar in both the source and the
destination cases.
+我们通过使用 `os.walk` 查询文件系统,并为一系列路径生成哈希值。这在源目录和目标目录这两种情况下是相似的。
2. We decide whether a file is new, renamed, or redundant.
+我们判断一个文件是新的、被重命名的,还是多余的。
3. We copy, move, or delete files to match the source.
+我们复制、移动或删除文件以使其与源目录匹配。
((("simplifying abstractions")))
@@ -311,10 +376,13 @@ responsibilities. That will let us hide the messy details so we can
focus on the interesting logic.footnote:[If you're used to thinking in terms of
interfaces, that's what we're trying to define here.]
+请记住,我们希望为这些职责中的每一项找到 _简化的抽象_。这将使我们能够隐藏繁琐的细节,从而专注于有趣的逻辑。footnote:[如果你习惯于从接口的角度思考,这正是我们想要在这里定义的内容。]
+
NOTE: In this chapter, we're refactoring some gnarly code into a more testable
structure by identifying the separate tasks that need to be done and giving
each task to a clearly defined actor, along similar lines to <>.
+在本章中,我们通过识别需要完成的独立任务,并将每个任务交给一个明确定义的参与者,来将一些复杂的代码重构为更具可测试性的结构,这与 <> 的方法类似。
((("dictionaries", "for filesystem operations")))
((("hashing a file", "dictionary of hashes to paths")))
@@ -324,17 +392,24 @@ build up a dictionary for the destination folder as well as the source, and
then we just compare two dicts?" That seems like a nice way to abstract the
current state of the filesystem:
+对于步骤 1 和 2,我们已经直观地开始使用一种抽象,即一个从哈希值到路径的字典。你可能已经在想:“为什么不同时为目标文件夹和源文件夹构建一个字典,
+然后简单地比较两个字典呢?”这似乎是对文件系统当前状态进行抽象的一种好方法:
+
source_files = {'hash1': 'path1', 'hash2': 'path2'}
dest_files = {'hash1': 'path1', 'hash2': 'pathX'}
What about moving from step 2 to step 3? How can we abstract out the
actual move/copy/delete filesystem interaction?
+那么,从步骤 2 到步骤 3 呢?我们如何抽象化实际的移动/复制/删除文件系统交互呢?
+
((("coupling", "separating what you want to do from how to do it")))
We'll apply a trick here that we'll employ on a grand scale later in
the book. We're going to separate _what_ we want to do from _how_ to do it.
We're going to make our program output a list of commands that look like this:
+我们将在这里运用一个技巧,这个技巧在本书后面还会被大规模运用。我们将把 _我们想做什么_ 与 _如何去做_ 分离开来。我们会让程序输出一个命令列表,看起来像这样:
+
("COPY", "sourcepath", "destpath"),
("MOVE", "old", "new"),
@@ -342,13 +417,17 @@ We're going to make our program output a list of commands that look like this:
Now we could write tests that just use two filesystem dicts as inputs, and we would
expect lists of tuples of strings representing actions as outputs.
+现在,我们可以编写测试,使用两个文件系统字典作为输入,并期望得到一个由字符串元组组成的列表作为输出,这些元组代表动作。
+
Instead of saying, "Given this actual filesystem, when I run my function,
check what actions have happened," we say, "Given this _abstraction_ of a filesystem,
what _abstraction_ of filesystem actions will happen?"
+我们不再说:“给定这个实际文件系统,当我运行我的函数时,检查发生了哪些操作。”而是说:“给定这个文件系统的 _抽象_,会发生哪些文件系统操作的 _抽象_?”
+
[[better_tests]]
-.Simplified inputs and outputs in our tests (test_sync.py)
+.Simplified inputs and outputs in our tests (test_sync.py)(在我们的测试中简化输入和输出)
====
[source,python]
[role="skip"]
@@ -369,6 +448,7 @@ what _abstraction_ of filesystem actions will happen?"
=== Implementing Our Chosen Abstractions
+实现我们选择的抽象
((("abstractions", "implementing chosen abstraction", id="ix_absimpl")))
((("abstractions", "choosing right abstraction", startref="ix_abscho")))
@@ -377,6 +457,8 @@ what _abstraction_ of filesystem actions will happen?"
That's all very well, but how do we _actually_ write those new
tests, and how do we change our implementation to make it all work?
+这都很好,但我们 _实际上_ 要如何编写这些新测试,并且如何更改我们的实现使其全部正常工作呢?
+
((("Functional Core, Imperative Shell (FCIS)")))
((("Bernhardt, Gary")))
((("testing", "after implementing chosen abstraction", id="ix_tstaftabs")))
@@ -388,17 +470,25 @@ by Gary Bernhardt as
https://oreil.ly/wnad4[Functional
Core, Imperative Shell], or FCIS).
+我们的目标是隔离系统中巧妙的部分,并能够彻底地测试它,而无需搭建真实的文件系统。我们将创建代码的一个“核心”,它不依赖外部状态,
+然后观察当我们给它来自外部世界的输入时,它会如何响应(这种方法被 Gary Bernhardt 描述为 https://oreil.ly/wnad4[函数式核心,命令式外壳],简称 FCIS)。
+
((("I/O", "disentangling details from program logic")))
((("state", "splitting off from logic in the program")))
((("business logic", "separating from state in code")))
Let's start off by splitting the code to separate the stateful parts from
the logic.
+我们先从拆分代码开始,将有状态的部分与逻辑部分分离开来。
+
And our top-level function will contain almost no logic at all; it's just an
imperative series of steps: gather inputs, call our logic, apply outputs:
+
+我们的顶层函数几乎不包含任何逻辑;它只是一个命令式的步骤序列:收集输入、调用逻辑、应用输出:
+
[[three_parts]]
-.Split our code into three (sync.py)
+.Split our code into three (sync.py)(将我们的代码分成三部分)
====
[source,python]
----
@@ -422,16 +512,20 @@ def sync(source, dest):
====
<1> Here's the first function we factor out, `read_paths_and_hashes()`, which
isolates the I/O part of our application.
+这里是我们提取的第一个函数 `read_paths_and_hashes()`,它将应用程序的 I/O 部分隔离出来。
<2> Here is where we carve out the functional core, the business logic.
+这里是我们分离出函数式核心——也就是业务逻辑——的地方。
((("dictionaries", "dictionary of hashes to paths")))
The code to build up the dictionary of paths and hashes is now trivially easy
to write:
+现在,用于构建路径和哈希字典的代码变得极其简单:
+
[[read_paths_and_hashes]]
-.A function that just does I/O (sync.py)
+.A function that just does I/O (sync.py)(一个只执行I/O的函数)
====
[source,python]
----
@@ -449,8 +543,11 @@ which says, "Given these two sets of hashes and filenames, what should we
copy/move/delete?". It takes simple data structures and returns simple data
structures:
+`determine_actions()` 函数将是我们业务逻辑的核心,它描述了:“给定这两个哈希值和文件名的集合,
+我们应该执行哪些复制/移动/删除操作?” 它接受简单的数据结构并返回简单的数据结构:
+
[[determine_actions]]
-.A function that just does business logic (sync.py)
+.A function that just does business logic (sync.py)(一个只执行业务逻辑的函数)
====
[source,python]
----
@@ -474,9 +571,11 @@ def determine_actions(source_hashes, dest_hashes, source_folder, dest_folder):
Our tests now act directly on the `determine_actions()` function:
+我们的测试现在直接针对 `determine_actions()` 函数进行操作:
+
[[harry_tests]]
-.Nicer-looking tests (test_sync.py)
+.Nicer-looking tests (test_sync.py)(更易阅读的测试)
====
[source,python]
----
@@ -499,6 +598,8 @@ def test_when_a_file_has_been_renamed_in_the_source():
Because we've disentangled the logic of our program--the code for identifying
changes--from the low-level details of I/O, we can easily test the core of our code.
+因为我们已经将程序的逻辑(用于识别更改的代码)与底层的 I/O 细节解耦,我们可以轻松地测试代码的核心部分。
+
((("edge-to-edge testing", id="ix_edgetst")))
With this approach, we've switched from testing our main entrypoint function,
`sync()`, to testing a lower-level function, `determine_actions()`. You might
@@ -508,8 +609,13 @@ another option, which is to modify the `sync()` function so it can
be unit tested _and_ end-to-end tested; it's an approach Bob calls
_edge-to-edge testing_.
+通过这种方法,我们已从测试主要入口函数 `sync()` 转变为测试更底层的函数 `determine_actions()`。你可能会认为这样不错,
+因为现在 `sync()` 非常简单了。或者,你可能决定保留一些集成/验收测试来测试 `sync()`。但还有另一种选择,就是修改 `sync()` 函数,
+使其既能够进行单元测试 _又_ 能进行端到端测试,这是一种 Bob 称为 _边到边测试_ 的方法。
+
==== Testing Edge to Edge with Fakes and Dependency Injection
+使用伪造对象和依赖注入进行边到边测试
((("dependencies", "edge-to-edge testing with dependency injection", id="ix_depinj")))
((("testing", "after implementing chosen abstraction", "edge-to-edge testing with fakes and dependency injection", id="ix_tstaftabsedge")))
@@ -518,14 +624,18 @@ When we start writing a new system, we often focus on the core logic first,
driving it with direct unit tests. At some point, though, we want to test bigger
chunks of the system together.
+当我们开始编写一个新系统时,通常会先专注于核心逻辑,并通过直接的单元测试来驱动它。然而,在某个阶段,我们会希望将系统中的更大块内容一起进行测试。
+
((("faking", "faking I/O in edge-to-edge test")))
We _could_ return to our end-to-end tests, but those are still as tricky to
write and maintain as before. Instead, we often write tests that invoke a whole
system together but fake the I/O, sort of _edge to edge_:
+我们 _可以_ 回到端到端测试,但这些测试依然和以前一样难以编写和维护。相反,我们通常会编写一些测试,这些测试调用整个系统,但伪造了 I/O,有点像 _边到边_ 测试:
+
[[di_version]]
-.Explicit dependencies (sync.py)
+.Explicit dependencies (sync.py)(显式依赖)
====
[source,python]
[role="skip"]
@@ -552,24 +662,31 @@ def sync(source, dest, filesystem=FileSystem()): #<1>
====
<1> Our top-level function now exposes a new dependency, a `FileSystem`.
+我们的顶层函数现在暴露了一个新依赖项,即 `FileSystem`。
<2> We invoke `filesystem.read()` to produce our files dict.
+我们调用 `filesystem.read()` 来生成我们的文件字典。
<3> We invoke the ++FileSystem++'s `.copy()`, `.move()` and `.delete()` methods
to apply the changes we detect.
+我们调用 ++FileSystem++ 的 `.copy()`、`.move()` 和 `.delete()` 方法来应用我们检测到的更改。
TIP: Although we're using dependency injection, there is no need
to define an abstract base class or any kind of explicit interface. In this
book, we often show ABCs because we hope they help you understand what the
abstraction is, but they're not necessary. Python's dynamic nature means
we can always rely on duck typing.
+虽然我们使用了依赖注入,但没有必要定义抽象基类或任何形式的显式接口。在本书中,我们经常展示抽象基类(ABC),
+因为我们希望它们能帮助你理解抽象的概念,但它们并不是必需的。Python 的动态特性意味着我们始终可以依赖于鸭子类型。
// IDEA [KP] Again, one could mention PEP544 protocols here. For some reason, I like them.
The real (default) implementation of our FileSystem abstraction does real I/O:
+我们 FileSystem 抽象的真实(默认)实现执行真实的 I/O:
+
[[real_filesystem_wrapper]]
-.The real dependency (sync.py)
+.The real dependency (sync.py)(真实依赖)
====
[source,python]
[role="skip"]
@@ -593,8 +710,10 @@ class FileSystem:
But the fake one is a wrapper around our chosen abstractions,
rather than doing real I/O:
+但伪对象是围绕我们选择的抽象的一个包装,而不是执行真实的 I/O:
+
[[fake_filesystem]]
-.Tests using DI
+.Tests using DI(使用依赖注入的测试)
====
[source,python]
[role="skip"]
@@ -620,6 +739,7 @@ class FakeFilesystem:
<1> We initialize our fake filesystem using the abstraction we chose to
represent filesystem state: dictionaries of hashes to paths.
+我们使用选定的抽象——哈希到路径的字典——来初始化我们的伪文件系统,以表示文件系统的状态。
<2> The action methods in our `FakeFileSystem` just append a record to a list
of `.actions` so we can inspect it later. This means our test double is both
@@ -627,6 +747,7 @@ class FakeFilesystem:
((("test doubles")))
((("fake objects")))
((("spy objects")))
+我们 `FakeFileSystem` 中的操作方法只是将一个记录附加到 `.actions` 的列表中,以便我们稍后检查。这意味着我们的测试替身既是一个“伪对象”,也是一个“间谍”。
So now our tests can act on the real, top-level `sync()` entrypoint,
but they do so using the `FakeFilesystem()`. In terms of their
@@ -634,9 +755,12 @@ setup and assertions, they end up looking quite similar to the ones
we wrote when testing directly against the functional core `determine_actions()`
function:
+现在我们的测试可以作用于真实的顶层入口点 `sync()`,但它们使用的是 `FakeFilesystem()`。从设置和断言的角度来看,
+它们最终看起来与我们直接针对函数式核心 `determine_actions()` 函数编写的测试非常相似:
+
[[bob_tests]]
-.Tests using DI
+.Tests using DI(使用依赖注入的测试)
====
[source,python]
[role="skip"]
@@ -667,6 +791,9 @@ our stateful components explicit and pass them around.
David Heinemeier Hansson, the creator of Ruby on Rails, famously described this
as "test-induced design damage."
+这种方法的优点是我们的测试作用于生产代码中使用的完全相同的函数。缺点是我们必须使有状态的组件显式化并在代码中传递它们。
+Ruby on Rails 的创建者 David Heinemeier Hansson 对此有一个著名的说法:“测试诱发的设计损伤”(test-induced design damage)。
+
((("edge-to-edge testing", startref="ix_edgetst")))
((("testing", "after implementing chosen abstraction", "edge-to-edge testing with fakes and dependency injection", startref="ix_tstaftabsedge")))
((("dependencies", "edge-to-edge testing with dependency injection", startref="ix_depinj")))
@@ -674,8 +801,11 @@ as "test-induced design damage."
In either case, we can now work on fixing all the bugs in our implementation;
enumerating tests for all the edge cases is now much easier.
+无论哪种情况,我们现在都可以专注于修复实现中的所有错误;为所有边界情况列举测试现在变得更加容易。
+
==== Why Not Just Patch It Out?
+为什么不直接用补丁来解决?
((("mock.patch method")))
((("mocking", "avoiding use of mock.patch")))
@@ -684,41 +814,57 @@ enumerating tests for all the edge cases is now much easier.
At this point you may be scratching your head and thinking,
"Why don't you just use `mock.patch` and save yourself the effort?"
+此时,你可能会挠头思考:“为什么不直接使用 `mock.patch` 来省事呢?”
+
We avoid using mocks in this book and in our production code too. We're not
going to enter into a Holy War, but our instinct is that mocking frameworks,
particularly monkeypatching, are a code smell.
+在本书以及我们的生产代码中,我们避免使用 Mock。我们不想引发一场“圣战”,但我们的直觉是,Mock 框架,
+尤其是猴子补丁(monkeypatching),是一种代码坏味道。
+
Instead, we like to clearly identify the responsibilities in our codebase, and to
separate those responsibilities into small, focused objects that are easy to
replace with a test double.
+相反,我们更倾向于清晰地识别代码库中的职责,并将这些职责分离成小而专注的对象,这些对象容易被测试替身替代。
+
NOTE: You can see an example in <>,
where we `mock.patch()` out an email-sending module, but eventually we
replace that with an explicit bit of dependency injection in
<>.
+你可以在 <> 中看到一个示例,我们使用 `mock.patch()` 替换了一个发送电子邮件的模块,但最终我们在 <> 中用依赖注入的明确实现替代了它。
We have three closely related reasons for our preference:
+我们偏好这种做法,有三个密切相关的原因:
+
* Patching out the dependency you're using makes it possible to unit test the
code, but it does nothing to improve the design. Using `mock.patch` won't let your
code work with a `--dry-run` flag, nor will it help you run against an FTP
server. For that, you'll need to introduce abstractions.
+通过补丁替换掉你所使用的依赖,可以让代码进行单元测试,但对改进设计毫无帮助。
+使用 `mock.patch` 不会让你的代码支持 `--dry-run` 标志,也不会帮助你针对 FTP 服务器运行。要做到这些,你需要引入抽象。
* Tests that use mocks _tend_ to be more coupled to the implementation details
of the codebase. That's because mock tests verify the interactions between
things: did we call `shutil.copy` with the right arguments? This coupling between
code and test _tends_ to make tests more brittle, in our experience.
((("coupling", "in tests that use mocks")))
+使用 Mock 的测试 _往往_ 更加耦合于代码库的实现细节。这是因为 Mock 测试验证的是各部分之间的交互:我们是否以正确的参数调用了 `shutil.copy`?
+根据我们的经验,这种代码与测试之间的耦合 _往往_ 会使测试更脆弱。
* Overuse of mocks leads to complicated test suites that fail to explain the
code.
+过度使用 Mock 会导致测试套件变得复杂,并且无法很好地解释代码。
NOTE: Designing for testability really means designing for
extensibility. We trade off a little more complexity for a cleaner design
that admits novel use cases.
+为测试性而设计实际上意味着为可扩展性而设计。我们用稍微多一些的复杂性换取更简洁的设计,从而能够支持新的用例。
[role="nobreakinside less_space"]
-.Mocks Versus Fakes; Classic-Style Versus London-School TDD
+.Mocks Versus Fakes; Classic-Style Versus London-School TDD(模拟对象与伪造对象;经典风格与伦敦学派 TDD)
*******************************************************************************
((("test doubles", "mocks versus fakes")))
@@ -727,15 +873,20 @@ NOTE: Designing for testability really means designing for
Here's a short and somewhat simplistic definition of the difference between
mocks and fakes:
+这里有一个简短且稍显简单的关于 Mock 和 Fake 区别的定义:
+
* Mocks are used to verify _how_ something gets used; they have methods
like `assert_called_once_with()`. They're associated with London-school
TDD.
+Mocks 用于验证某件事情 _如何_ 被使用;它们有像 `assert_called_once_with()` 这样的方法。它们通常与伦敦学派的 TDD(测试驱动开发)相关联。
* Fakes are working implementations of the thing they're replacing, but
they're designed for use only in tests. They wouldn't work "in real life";
our in-memory repository is a good example. But you can use them to make assertions about
the end state of a system rather than the behaviors along the way, so
they're associated with classic-style TDD.
+Fakes 是被替代对象的可工作实现,但它们仅供测试使用。它们在“现实生活”中无法正常工作;我们的内存仓储就是一个很好的例子。
+但你可以用它们对系统的最终状态进行断言,而不是对过程中发生的行为进行断言,因此它们通常与经典风格的 TDD(测试驱动开发)相关联。(下面的小例子对比了这两种风格。)
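+To make the contrast concrete, here is a self-contained toy example (the
+`send_greeting` and `FakeMailer` names are hypothetical, invented just for
+this illustration):
+
+[source,python]
+[role="skip"]
+----
+from unittest import mock
+
+
+def send_greeting(mailer, address):
+    mailer.send(address, "Hello!")
+
+
+# Mock style: verify *how* the collaborator was used.
+mailer = mock.Mock()
+send_greeting(mailer, "a@example.com")
+mailer.send.assert_called_once_with("a@example.com", "Hello!")
+
+
+# Fake style: a working in-memory implementation; assert on end state.
+class FakeMailer:
+    def __init__(self):
+        self.sent = []
+
+    def send(self, address, body):
+        self.sent.append((address, body))
+
+
+fake = FakeMailer()
+send_greeting(fake, "a@example.com")
+assert fake.sent == [("a@example.com", "Hello!")]
+----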
((("Fowler, Martin")))
((("stubbing, mocks and stubs")))
@@ -744,6 +895,9 @@ We're slightly conflating mocks with spies and fakes with stubs here, and you
can read the long, correct answer in Martin Fowler's classic essay on the subject
called https://oreil.ly/yYjBN["Mocks Aren't Stubs"].
+这里我们有些将 Mocks 与 Spies 以及 Fakes 与 Stubs 混为一谈了。你可以阅读 Martin Fowler 关于这一主题的
+经典文章 https://oreil.ly/yYjBN["Mocks Aren't Stubs"] 来了解更长、更准确的答案。
+
((("MagicMock objects")))
((("unittest.mock function")))
((("test doubles", "mocks versus stubs")))
@@ -752,6 +906,9 @@ It also probably doesn't help that the `MagicMock` objects provided by
But they're also often used as stubs or dummies. There, we promise we're done with
the test double terminology nitpicks now.
+`unittest.mock` 提供的 `MagicMock` 对象,严格来说,并不是 Mocks;如果非要定义的话,它们更像是 Spies。
+但它们也经常被用作 Stubs 或 Dummies。好了,我们保证现在已经结束了对测试替身术语的这些吹毛求疵。
+
//IDEA (hynek) you could mention Alex Gaynor's `pretend` which gives you
// stubs without mocks error-prone magic.
@@ -768,17 +925,28 @@ checks on the behavior of intermediary collaborators.footnote:[Which is not to
say that we think the London school people are wrong. Some insanely smart
people work that way. It's just not what we're used to.]
+那么伦敦学派和经典风格的 TDD 之间该如何取舍?你可以在我们刚提到的 Martin Fowler 的文章中,
+以及 https://oreil.ly/H2im_[Software Engineering Stack Exchange 网站] 上,阅读更多关于这两种方法的信息。但在本书中,
+我们相当坚定地站在经典派这一边。我们喜欢围绕状态来构建测试,无论是在设置还是断言中,并且我们喜欢在尽可能高的抽象层次上工作,
+而不是检查中间协作对象的行为。footnote:[这并不是说我们认为伦敦学派的人是错误的。一些非常聪明的人就是以这种方式工作的。这只是我们不太习惯的方式而已。]
+
Read more on this in <>.
+
+在 <> 中阅读更多相关内容。
*******************************************************************************
We view TDD as a design practice first and a testing practice second. The tests
act as a record of our design choices and serve to explain the system to us
when we return to the code after a long absence.
+我们将 TDD 首先视为一种设计实践,其次才是测试实践。这些测试记录了我们的设计选择,并在我们长时间后重新回到代码时,帮助我们理解系统。
+
((("mocking", "overmocked tests, pitfalls of")))
Tests that use too many mocks get overwhelmed with setup code that hides the
story we care about.
+使用过多 Mock 的测试会被大量的设置代码淹没,从而掩盖了我们真正关心的核心内容。
+
(((""Test-Driven Development: That's Not What We Meant"", primary-sortas="Test-Driven Development")))
((("Freeman, Steve")))
((("PyCon talk on Mocking Pitfalls")))
@@ -789,6 +957,9 @@ You should also check out this PyCon talk, https://oreil.ly/s3e05["Mocking and P
by our esteemed tech reviewer, Ed Jung, which also addresses mocking and its
alternatives.
+Steve Freeman 在他的演讲 https://youtu.be/yuEbZYKgZas?si=ZpBoivlDH13XTG9p&t=294["Test-Driven Development: That's Not What We Meant"] 中展示了一个关于过度 Mock 的精彩示例。
+你还可以看看我们尊敬的技术审稿人 Ed Jung 的 PyCon 演讲 https://oreil.ly/s3e05["Mocking and Patching Pitfalls"],其中同样讨论了 Mock 及其替代方案。
+
And while we're recommending talks, check out the wonderful Brandon Rhodes
in https://oreil.ly/oiXJM["Hoisting Your I/O"]. It's not actually about mocks,
but is instead about the general issue of decoupling business logic from I/O,
@@ -796,6 +967,9 @@ in which he uses a wonderfully simple illustrative example.
((("hoisting I/O")))
((("Rhodes, Brandon")))
+同时,既然我们在推荐演讲,也强烈推荐你观看 Brandon Rhodes 的精彩演讲: https://oreil.ly/oiXJM["Hoisting Your I/O"]。
+这其实并非关于 Mock,而是关于将业务逻辑与 I/O 解耦的一般性问题,他在演讲中使用了一个极其简单的示例来进行说明。
+
TIP: In this chapter, we've spent a lot of time replacing end-to-end tests with
unit tests. That doesn't mean we think you should never use E2E tests!
@@ -805,9 +979,12 @@ TIP: In this chapter, we've spent a lot of time replacing end-to-end tests with
for more details.
((("unit testing", "unit tests replacing end-to-end tests")))
((("end-to-end tests", "replacement with unit tests")))
+在本章中,我们花了很多时间用单元测试替换端到端(E2E)测试。但这并不意味着我们认为你永远不应该使用 E2E 测试!
+我们在本书中展示的技术旨在帮助你构建一个合理的测试金字塔,其中尽可能多地包含单元测试,并仅使用最少数量的 E2E 测试以让你感到自信。
+阅读 <> 获取更多详细信息。
-.So Which Do We Use In This Book? Functional or Object-Oriented Composition?
+.So Which Do We Use In This Book? Functional or Object-Oriented Composition?(那么在本书中我们使用哪种方法?函数式还是面向对象的组合?)
******************************************************************************
((("object-oriented composition")))
Both. Our domain model is entirely free of dependencies and side effects,
@@ -816,11 +993,18 @@ so that's our functional core. The service layer that we build around it
and we use dependency injection to provide those services with stateful
components, so we can still unit test them.
+两者兼用。我们的领域模型完全没有依赖和副作用,这就是我们的函数式核心。
+在其周围构建的服务层(见 <>)允许我们以边到边的方式驱动系统,
+并通过依赖注入为这些服务提供有状态的组件,因此我们仍然可以对它们进行单元测试。
+
See <> for more exploration of making our
dependency injection more explicit and centralized.
+
+请参阅 <>,了解更多关于如何使我们的依赖注入更加显式和集中的探索。
******************************************************************************
=== Wrap-Up
+总结
((("abstractions", "implementing chosen abstraction", startref="ix_absimpl")))
((("abstractions", "simplifying interface between business logic and I/O")))
@@ -834,24 +1018,37 @@ systems easier to test and maintain by simplifying the interface between our
business logic and messy I/O. Finding the right abstraction is tricky, but here are
a few heuristics and questions to ask yourself:
+我们会在本书中一再看到这个理念:通过简化业务逻辑和混乱的 I/O 之间的接口,我们可以让系统更容易测试和维护。
+找到合适的抽象并不容易,但这里有一些启发式方法和值得问自己的问题:
+
* Can I choose a familiar Python data structure to represent the state of the
messy system and then try to imagine a single function that can return that
state?
+我能选择一个熟悉的 Python 数据结构来表示这个混乱系统的状态,然后尝试设想一个可以返回该状态的单一函数吗?
* Separate the _what_ from the _how_:
can I use a data structure or DSL to represent the external effects I want to happen,
independently of _how_ I plan to make them happen?
+将 _what_ 与 _how_ 分离:
+我能否使用一个数据结构或领域专用语言(DSL)来表示我想要发生的外部效果,而与我计划如何实现它们的方式无关?
* Where can I draw a line between my systems,
where can I carve out a https://oreil.ly/zNUGG[seam]
to stick that abstraction in?
((("seams")))
+我可以在系统之间的哪里划清界限,
+又可以在哪里开辟一条 https://oreil.ly/zNUGG[接缝(seam)],好把那个抽象插进去?
* What is a sensible way of dividing things into components with different responsibilities?
What implicit concepts can I make explicit?
+将事物划分为具有不同职责的组件,什么样的方式是合理的?
+我可以将哪些隐含的概念显式化?
* What are the dependencies, and what is the core business logic?
+哪些是依赖项,哪些是核心业务逻辑?
((("abstractions", startref="ix_abs")))
Practice makes less imperfect! And now back to our regular programming...
+
+练习会让我们少一些不完美!现在,回到我们的常规编程内容……
diff --git a/chapter_04_service_layer.asciidoc b/chapter_04_service_layer.asciidoc
index 83bf6a57..bc8987a7 100644
--- a/chapter_04_service_layer.asciidoc
+++ b/chapter_04_service_layer.asciidoc
@@ -1,13 +1,17 @@
[[chapter_04_service_layer]]
== Our First Use Case: [.keep-together]#Flask API and Service Layer#
+我们的第一个用例:Flask API 和服务层
((("service layer", id="ix_serlay")))
((("Flask framework", "Flask API and service layer", id="ix_Flskapp")))
Back to our allocations project! <> shows the point we reached at the end of <>, which covered the Repository pattern.
+回到我们的分配项目!<> 展示了我们在 <> 结束时所达到的阶段,
+该章节讲述了仓储模式(Repository pattern)。
+
[role="width-75"]
[[maps_service_layer_before]]
-.Before: we drive our app by talking to repositories and the domain model
+.Before: we drive our app by talking to repositories and the domain model(之前:我们通过与仓储和领域模型交互来驱动我们的应用程序)
image::images/apwp_0401.png[]
@@ -16,17 +20,25 @@ business logic, and interfacing code, and we introduce the _Service Layer_
pattern to take care of orchestrating our workflows and defining the use
cases of our system.
+在本章中,我们将讨论编排逻辑、业务逻辑和接口代码之间的区别,并引入 _服务层_ 模式来负责编排我们的工作流程以及定义系统的用例。
+
We'll also discuss testing: by combining the Service Layer with our repository
abstraction over the database, we're able to write fast tests, not just of
our domain model but of the entire workflow for a use case.
+我们还将讨论测试:通过将服务层与数据库的仓储抽象结合起来,我们不仅可以为领域模型编写快速测试,还可以为用例的整个工作流程编写快速测试。
+
<> shows what we're aiming for: we're going to
add a Flask API that will talk to the service layer, which will serve as the
entrypoint to our domain model. Because our service layer depends on the
`AbstractRepository`, we can unit test it by using `FakeRepository` but run our production code using `SqlAlchemyRepository`.
+<> 展示了我们的目标:我们将添加一个与服务层对接的 Flask API,它将作为进入领域模型的入口。
+由于服务层依赖于 `AbstractRepository`,我们可以通过使用 `FakeRepository` 对其进行单元测试,
+但在生产代码中使用 `SqlAlchemyRepository` 来运行。
+
[[maps_service_layer_after]]
-.The service layer will become the main way into our app
+.The service layer will become the main way into our app(服务层将成为进入我们应用程序的主要方式)
image::images/apwp_0402.png[]
// IDEA more detailed legend
@@ -35,11 +47,16 @@ In our diagrams, we are using the convention that new components
are highlighted with bold text/lines (and yellow/orange color, if you're
reading a digital version).
+在我们的图表中,我们采用的约定是用加粗的文本/线条(如果你阅读的是数字版,还会使用黄色/橙色的颜色)来突出新的组件。
+
[TIP]
====
The code for this chapter is in the
chapter_04_service_layer branch https://oreil.ly/TBRuy[on GitHub]:
+本章的代码位于 https://oreil.ly/TBRuy[GitHub] 上的 chapter_04_service_layer 分支:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -51,6 +68,7 @@ git checkout chapter_02_repository
=== Connecting Our Application to the Real World
+将我们的应用程序连接到现实世界
((("service layer", "connecting our application to real world")))
((("Flask framework", "Flask API and service layer", "connecting the app to real world")))
@@ -59,27 +77,39 @@ in front of the users to start gathering feedback. We have the core
of our domain model and the domain service we need to allocate orders,
and we have the repository interface for permanent storage.
+像任何优秀的敏捷团队一样,我们正在努力推出一个最小可行产品(MVP),并将其呈现在用户面前以开始收集反馈。
+我们已经拥有了分配订单所需的领域模型核心和领域服务,并且还有用于持久化存储的仓储接口。
+
Let's plug all the moving parts together as quickly as we
can and then refactor toward a cleaner architecture. Here's our
plan:
+让我们尽快将所有活动部件连接起来,然后再通过重构实现更清晰的架构。以下是我们的计划:
+
1. Use Flask to put an API endpoint in front of our `allocate` domain service.
Wire up the database session and our repository. Test it with
an end-to-end test and some quick-and-dirty SQL to prepare test
data.
((("Flask framework", "putting API endpoint in front of allocate domain service")))
+使用 Flask 在我们的 `allocate` 领域服务前添加一个 API 端点。
+连接数据库会话和我们的仓储。用一个端到端测试来验证它,并用一些快速粗糙的 SQL 来准备测试数据。
2. Refactor out a service layer that can serve as an abstraction to
capture the use case and that will sit between Flask and our domain model.
Build some service-layer tests and show how they can use
`FakeRepository`.
+重构出一个服务层,作为捕获用例的抽象,它将位于 Flask 和我们的领域模型之间。
+编写一些服务层的测试,并展示如何使用 `FakeRepository` 来进行测试。
3. Experiment with different types of parameters for our service layer
functions; show that using primitive data types allows the service layer's
clients (our tests and our Flask API) to be decoupled from the model layer.
+尝试为我们的服务层函数使用不同类型的参数;
+展示使用基础数据类型(primitives)如何使服务层的客户端(我们的测试和 Flask API)与模型层解耦。
=== A First End-to-End Test
+第一个端到端测试
((("APIs", "end-to-end test of allocate API")))
((("end-to-end tests", "of allocate API")))
@@ -90,14 +120,22 @@ an integration test versus a unit test. Different projects need different
combinations of tests, and we've seen perfectly successful projects just split
things into "fast tests" and "slow tests."
+没有人愿意陷入一场关于端到端(E2E)测试、功能测试、验收测试、集成测试与单元测试之间定义的漫长术语争论。不同的项目需要不同组合的测试,
+我们也见过一些非常成功的项目,仅仅将测试分为“快速测试”和“慢速测试”。
+
For now, we want to write one or maybe two tests that are going to exercise
a "real" API endpoint (using HTTP) and talk to a real database. Let's call
them _end-to-end tests_ because it's one of the most self-explanatory names.
+目前,我们希望编写一到两个测试,这些测试将用于运行一个“真实”的 API 端点(使用 HTTP)并与真实的数据库进行交互。
+我们将其称为 _端到端测试_,因为这是最直观易懂的名称之一。
+
The following shows a first cut:
+以下是初步的实现:
+
[[first_api_test]]
-.A first API test (test_api.py)
+.A first API test (test_api.py)(第一个 API 测试)
====
[source,python]
[role="non-head"]
@@ -129,12 +167,16 @@ def test_api_returns_allocation(add_stock):
generate randomized characters by using the `uuid` module. Because
we're running against an actual database now, this is one way to prevent
various tests and runs from interfering with each other.
+`random_sku()`、`random_batchref()` 等是一些辅助函数,它们使用 `uuid` 模块生成随机字符。
+因为我们现在是针对真实数据库运行测试,这是防止各个测试和多次运行之间相互干扰的一种方法。
<2> `add_stock` is a helper fixture that just hides away the details of
manually inserting rows into the database using SQL. We'll show a nicer
way of doing this later in the chapter.
+`add_stock` 是一个辅助夹具(fixture),它只是隐藏了用 SQL 手动向数据库插入行的细节。本章稍后我们会展示一种更优雅的做法。
<3> _config.py_ is a module in which we keep configuration information.
+_config.py_ 是一个用于存放配置信息的模块。
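+
+比如,这些辅助函数可能长成下面这样(示意性草图,具体实现以随书仓库的代码为准):
+
+[source,python]
+----
+# 示意性草图:利用 uuid 生成随机标识,避免测试之间互相干扰
+import uuid
+
+
+def random_suffix():
+    return uuid.uuid4().hex[:6]
+
+
+def random_sku(name=""):
+    return f"sku-{name}-{random_suffix()}"
+
+
+def random_batchref(name=""):
+    return f"batch-{name}-{random_suffix()}"
+----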
((("Flask framework", "Flask API and service layer", "first API end-to-end test", startref="ix_Flskappe2e")))
Everyone solves these problems in different ways, but you're going to need some
@@ -142,16 +184,22 @@ way of spinning up Flask, possibly in a container, and of talking to a
Postgres database. If you want to see how we did it, check out
<>.
+每个人都会以不同的方式解决这些问题,但你需要某种方法来启动 Flask(可能是在一个容器中),并与一个 Postgres 数据库进行交互。
+如果你想了解我们是如何实现的,可以参考 <>。
+
=== The Straightforward Implementation
+直接的实现方案
((("service layer", "first cut of Flask app", id="ix_serlay1Flapp")))
((("Flask framework", "Flask API and service layer", "first cut of the app", id="ix_Flskapp1st")))
Implementing things in the most obvious way, you might get something like this:
+按照最直接的方式实现,你可能会得到如下代码:
+
[[first_cut_flask_app]]
-.First cut of Flask app (flask_app.py)
+.First cut of Flask app (flask_app.py)(Flask 应用的初步实现)
====
[source,python]
[role="non-head"]
@@ -188,6 +236,8 @@ def allocate_endpoint():
So far, so good. No need for too much more of your "architecture astronaut"
nonsense, Bob and Harry, you may be thinking.
+到目前为止,一切都很好。你也许在想:Bob、Harry,用不着再来那么多“架构宇航员”式的废话了。
+
((("databases", "testing allocations persisted to database")))
But hang on a minute--there's no commit. We're not actually saving our
allocation to the database. Now we need a second test, either one that will
@@ -195,8 +245,11 @@ inspect the database state after (not very black-boxy), or maybe one that
checks that we can't allocate a second line if a first should have already
depleted the batch:
+但且慢——我们还没有提交。实际上,我们还没有把分配结果保存到数据库中。现在我们需要第二个测试:
+要么在操作之后检查数据库状态(不太“黑盒”),要么检查当第一个订单行已经耗尽批次时,我们无法再分配第二个订单行:
+
[[second_api_test]]
-.Test allocations are persisted (test_api.py)
+.Test allocations are persisted (test_api.py)(测试分配是否被持久化)
====
[source,python]
[role="non-head"]
@@ -229,25 +282,35 @@ def test_allocations_are_persisted(add_stock):
((("service layer", "first cut of Flask app", startref="ix_serlay1Flapp")))
Not quite so lovely, but that will force us to add the commit.
+虽然不太优雅,但这将迫使我们添加提交操作。
+
=== Error Conditions That Require Database Checks
+需要通过数据库检查的错误情况
((("service layer", "error conditions requiring database checks in Flask app")))
((("Flask framework", "Flask API and service layer", "error conditions requiring database checks")))
If we keep going like this, though, things are going to get uglier and uglier.
+不过,如果我们继续这样下去,事情会变得越来越丑陋。
+
Suppose we want to add a bit of error handling. What if the domain raises an
error, for a SKU that's out of stock? Or what about a SKU that doesn't even
exist? That's not something the domain even knows about, nor should it. It's
more of a sanity check that we should implement at the database layer, before
we even invoke the domain service.
+假设我们想添加一点错误处理。如果领域层因为某个 SKU 缺货(out of stock)而抛出错误怎么办?又或者某个 SKU 根本不存在呢?
+这些都不是领域层知道、也不应该知道的事情。这更像是一种合理性检查,我们应该在调用领域服务之前,在数据库层实现它。
+
Now we're looking at two more end-to-end tests:
+现在我们需要再实现两个端到端测试:
+
[[test_error_cases]]
-.Yet more tests at the E2E layer (test_api.py)
+.Yet more tests at the E2E layer (test_api.py)(在端到端(E2E)层上进行更多测试)
====
[source,python]
[role="non-head"]
@@ -277,15 +340,20 @@ def test_400_message_for_invalid_sku(): #<2>
====
<1> In the first test, we're trying to allocate more units than we have in stock.
+在第一个测试中,我们尝试分配超过库存数量的单位。
<2> In the second, the SKU just doesn't exist (because we never called `add_stock`),
so it's invalid as far as our app is concerned.
+在第二个测试中,SKU 根本不存在(因为我们从未调用过 `add_stock`),
+因此对我们的应用程序来说,这是无效的。
And sure, we could implement it in the Flask app too:
+当然,我们也可以在 Flask 应用中实现它:
+
[[flask_error_handling]]
-.Flask app starting to get crufty (flask_app.py)
+.Flask app starting to get crufty (flask_app.py)(Flask 应用开始变得臃肿)
====
[source,python]
[role="non-head"]
@@ -319,8 +387,12 @@ But our Flask app is starting to look a bit unwieldy. And our number of
E2E tests is starting to get out of control, and soon we'll end up with an
inverted test pyramid (or "ice-cream cone model," as Bob likes to call it).
+但是我们的 Flask 应用开始显得有点笨重了。而且我们的端到端(E2E)测试数量也开始失控,
+很快我们就会陷入测试金字塔倒置的情况(或者像 Bob 喜欢称呼的那样,是“冰淇淋蛋筒模型”)。
+
=== Introducing a Service Layer, and Using FakeRepository to Unit Test It
+引入服务层,并使用 FakeRepository 对其进行单元测试
((("service layer", "introducing and using FakeRepository to unit test it", id="ix_serlayintr")))
((("orchestration")))
@@ -333,16 +405,24 @@ web API endpoint (you'd need them if you were building a CLI, for example; see
<>), and they're not really things that need to be tested by
end-to-end tests.
+如果我们查看 Flask 应用正在做的事情,会发现其中相当一部分可以称为“**编排**”——从仓储中获取数据、根据数据库状态验证输入、处理错误以及在正常流程中提交。
+这些事情大多与是否有一个 Web API 端点无关(例如,如果你在构建一个 CLI,这些操作也是必需的;参见 <>),
+而且它们并不是真的需要通过端到端测试来进行验证的内容。
+
((("orchestration layer", see="service layer")))
((("use-case layer", see="service layer")))
It often makes sense to split out a service layer, sometimes called an
_orchestration layer_ or a _use-case layer_.
+通常,将服务层拆分出来是有意义的,它有时也被称为“_编排层_”或“_用例层_”。
+
((("faking", "FakeRepository")))
Do you remember the `FakeRepository` that we prepared in <>?
+你还记得我们在 <> 中准备的 `FakeRepository` 吗?
+
[[fake_repo]]
-.Our fake repository, an in-memory collection of batches (test_services.py)
+.Our fake repository, an in-memory collection of batches (test_services.py)(我们的伪造仓储,一个存储批次的内存集合)
====
[source,python]
----
@@ -367,9 +447,11 @@ class FakeRepository(repository.AbstractRepository):
Here's where it will come in useful; it lets us test our service layer with
nice, fast unit tests:
+这里就是它派上用场的地方了;它使我们能够通过简洁且快速的单元测试来测试我们的服务层:
+
[[first_services_tests]]
-.Unit testing with fakes at the service layer (test_services.py)
+.Unit testing with fakes at the service layer (test_services.py)(在服务层使用伪造对象进行单元测试)
====
[source,python]
[role="non-head"]
@@ -395,6 +477,7 @@ def test_error_for_invalid_sku():
<1> `FakeRepository` holds the `Batch` objects that will be used by our test.
+`FakeRepository` 保存了测试中将要使用的 `Batch` 对象。
<2> Our services module (_services.py_) will define an `allocate()`
service-layer function. It will sit between our `allocate_endpoint()`
@@ -402,15 +485,19 @@ def test_error_for_invalid_sku():
our domain model.footnote:[Service-layer services and domain services do have
confusingly similar names. We tackle this topic later in
<>.]
+我们的服务模块(_services.py_)将定义一个 `allocate()` 服务层函数。
+它位于 API 层的 `allocate_endpoint()` 函数与领域模型中 `allocate()` 领域服务函数之间。
+脚注:[服务层的服务和领域服务的名字确实相似得令人困惑。我们会在 <> 中讨论这个话题。]
<3> We also need a `FakeSession` to fake out the database session, as shown in
the following code snippet.
((("faking", "FakeSession, using to unit test the service layer")))
((("testing", "fake database session at service layer")))
+我们还需要一个 `FakeSession` 来模拟数据库会话,如下面的代码片段所示。
[[fake_session]]
-.A fake database session (test_services.py)
+.A fake database session (test_services.py)(一个伪造数据库会话)
====
[source,python]
----
@@ -426,9 +513,12 @@ This fake session is only a temporary solution. We'll get rid of it and make
things even nicer soon, in <>. But in the meantime
the fake `.commit()` lets us migrate a third test from the E2E layer:
+这个假的 session 只是一个临时的解决方案。我们很快会在 <> 中将其移除,并使事情变得更加优雅。
+但与此同时,假的 `.commit()` 让我们能够从端到端(E2E)层迁移第三个测试:
+
[[second_services_test]]
-.A second test at the service layer (test_services.py)
+.A second test at the service layer (test_services.py)(服务层的第二个测试)
====
[source,python]
[role="non-head"]
@@ -446,6 +536,7 @@ def test_commits():
==== A Typical Service Function
+一个典型的服务函数
((("functions", "service layer")))
((("service layer", "typical service function")))
@@ -453,8 +544,10 @@ def test_commits():
((("Flask framework", "Flask API and service layer", "introducing service layer and fake repo to unit test it", startref="ix_Flskappserly")))
We'll write a service function that looks something like this:
+我们将编写一个类似如下的服务函数:
+
[[service_function]]
-.Basic allocation service (services.py)
+.Basic allocation service (services.py)(基础的分配服务)
====
[source,python]
[role="non-head"]
@@ -479,25 +572,36 @@ def allocate(line: OrderLine, repo: AbstractRepository, session) -> str:
Typical service-layer functions have similar steps:
+典型的服务层函数具有类似的步骤:
+
<1> We fetch some objects from the repository.
+我们从仓储中获取一些对象。
<2> We make some checks or assertions about the request against
the current state of the world.
+我们根据当前的系统状态对请求进行一些检查或断言。
<3> We call a domain service.
+我们调用一个领域服务。
<4> If all is well, we save/update any state we've changed.
+如果一切正常,我们会保存/更新我们更改的任何状态。
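+
+把这四步合起来,服务函数的骨架大致如下(示意性草图,细节以正文代码清单为准;`is_valid_sku` 是为演示而补全的辅助函数):
+
+[source,python]
+----
+# 示意性草图:一个典型的服务层函数
+def is_valid_sku(sku, batches):
+    return sku in {b.sku for b in batches}
+
+
+def allocate(line: OrderLine, repo: AbstractRepository, session) -> str:
+    batches = repo.list()  # (1) 从仓储获取对象
+    if not is_valid_sku(line.sku, batches):  # (2) 针对当前状态做检查
+        raise InvalidSku(f"Invalid sku {line.sku}")
+    batchref = model.allocate(line, batches)  # (3) 调用领域服务
+    session.commit()  # (4) 保存/更新已更改的状态
+    return batchref
+----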
That last step is a little unsatisfactory at the moment, as our service
layer is tightly coupled to our database layer. We'll improve
that in <> with the Unit of Work pattern.
+最后一步目前有点不太令人满意,因为我们的服务层与数据库层紧密耦合。
+我们将在 <> 中使用工作单元(Unit of Work)模式对此进行改进。
+
[role="nobreakinside less_space"]
[[depend_on_abstractions]]
-.Depend on Abstractions
+.Depend on Abstractions(依赖抽象)
*******************************************************************************
Notice one more thing about our service-layer function:
+注意我们服务层函数的另一个特点:
+
[source,python]
[role="skip"]
----
@@ -511,6 +615,9 @@ and we've used the type hint to say that we depend on `AbstractRepository`.
This means it'll work both when the tests give it a `FakeRepository` and
when the Flask app gives it a `SqlAlchemyRepository`.
+它依赖于一个仓储(repository)。我们选择把这种依赖显式化,并用类型提示表明我们依赖的是 `AbstractRepository`。
+这意味着无论测试传入的是 `FakeRepository`,还是 Flask 应用传入的是 `SqlAlchemyRepository`,它都能正常工作。
+
((("dependencies", "depending on abstractions")))
If you remember <>,
this is what we mean when we say we should "depend on abstractions." Our
@@ -520,10 +627,15 @@ storage also depend on that same abstraction. See
<> and
<>.
+如果你还记得 <>,这就是我们说“应该依赖抽象”时的意思。我们的 _高层模块_ ——服务层——依赖于仓储抽象。
+而具体的持久化存储实现的 _细节_ 也依赖于同样的抽象。请参见 <> 和 <>。
+
See also in <> a worked example of swapping out the
_details_ of which persistent storage system to use while leaving the
abstractions intact.
+另请参见 <> 中的一个完整示例:在保持抽象不变的情况下,替换作为 _细节_ 的持久化存储系统。
+
*******************************************************************************
@@ -532,9 +644,11 @@ abstractions intact.
But the essentials of the service layer are there, and our Flask
app now looks a lot cleaner:
+但是服务层的核心已经存在了,并且我们的 Flask 应用现在看起来干净了许多:
+
[[flask_app_using_service_layer]]
-.Flask app delegating to service layer (flask_app.py)
+.Flask app delegating to service layer (flask_app.py)(Flask 应用委托给服务层)
====
[source,python]
[role="non-head"]
@@ -557,23 +671,31 @@ def allocate_endpoint():
====
<1> We instantiate a database session and some repository objects.
+我们实例化一个数据库会话和一些仓储对象。
<2> We extract the user's commands from the web request and pass them
to the service layer.
+我们从 Web 请求中提取用户的命令,并把它们传递给服务层。
<3> We return some JSON responses with the appropriate status codes.
+我们返回一些带有适当状态码的 JSON 响应。
The responsibilities of the Flask app are just standard web stuff: per-request
session management, parsing information out of POST parameters, response status
codes, and JSON. All the orchestration logic is in the use case/service layer,
and the domain logic stays in the domain.
+Flask 应用的职责只是标准的 Web 工作:每个请求的会话管理、从 POST 参数中解析信息、响应状态码以及 JSON。
+所有的编排逻辑都放在用例/服务层中,而领域逻辑保留在领域内。
+
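+一个符合上述职责划分的端点大致如下(示意性草图,细节以正文代码清单为准):
+
+[source,python]
+----
+# 示意性草图:"瘦"的 Flask 端点,只做 Web 相关的工作
+@app.route("/allocate", methods=["POST"])
+def allocate_endpoint():
+    session = get_session()  # 每个请求的会话管理
+    repo = repository.SqlAlchemyRepository(session)
+    line = model.OrderLine(
+        request.json["orderid"], request.json["sku"], request.json["qty"]
+    )
+    try:
+        batchref = services.allocate(line, repo, session)  # 编排逻辑委托给服务层
+    except (model.OutOfStock, services.InvalidSku) as e:
+        return {"message": str(e)}, 400  # 异常路径:合适的状态码
+    return {"batchref": batchref}, 201  # 正常路径
+----
+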
((("Flask framework", "Flask API and service layer", "end-to-end tests for happy and unhappy paths")))
((("service layer", "end-to-end test of allocate API, testing happy and unhappy paths")))
Finally, we can confidently strip down our E2E tests to just two, one for
the happy path and one for the unhappy path:
+最后,我们可以自信地将我们的端到端(E2E)测试精简为仅两个:一个用于验证正常路径,另一个用于验证异常路径:
+
[[fewer_e2e_tests]]
-.E2E tests only happy and unhappy paths (test_api.py)
+.E2E tests only happy and unhappy paths (test_api.py)(端到端测试仅覆盖正常路径和异常路径)
====
[source,python]
[role="non-head"]
@@ -615,27 +737,39 @@ We've successfully split our tests into two broad categories: tests about web
stuff, which we implement end to end; and tests about orchestration stuff, which
we can test against the service layer in memory.
+我们已经成功地把测试拆分为两大类:关于 Web 相关内容的测试,用端到端的方式实现;
+关于编排逻辑的测试,则可以在内存中针对服务层进行。
+
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
******************************************************************************
((("deallocate service, building (exerise)")))
Now that we have an allocate service, why not build out a service for
`deallocate`? We've added https://github.com/cosmicpython/code/tree/chapter_04_service_layer_exercise[an E2E test and a few stub service-layer tests] for
you to get started on GitHub.
+既然我们已经有了 `allocate` 服务,为什么不为 `deallocate` 也构建一个服务呢?我们在 GitHub 上为你准备了 https://github.com/cosmicpython/code/tree/chapter_04_service_layer_exercise[一个 E2E 测试和几个待补全的服务层测试桩],
+可以帮助你开始动手实践。
+
If that's not enough, continue into the E2E tests and _flask_app.py_, and
refactor the Flask adapter to be more RESTful. Notice how doing so doesn't
require any change to our service layer or domain layer!
+如果这还不够,可以继续深入研究 E2E 测试和 _flask_app.py_,并重构 Flask 适配器以使其更符合 RESTful 风格。
+注意,这样做并不需要对我们的服务层或领域层进行任何更改!
+
TIP: If you decide you want to build a read-only endpoint for retrieving allocation
info, just do "the simplest thing that can possibly work," which is
`repo.get()` right in the Flask handler. We'll talk more about reads versus
writes in <>.
+如果你决定要构建一个用于检索分配信息的只读端点,只需做“可能有效的最简单的事情”,也就是直接在 Flask 处理器中使用 `repo.get()`。
+我们将在 <> 中进一步讨论读操作与写操作的区别。
******************************************************************************
[[why_is_everything_a_service]]
=== Why Is Everything Called a Service?
+为什么所有东西都被叫做服务?
((("services", "application service and domain service")))
((("service layer", "difference between domain service and")))
@@ -644,23 +778,35 @@ TIP: If you decide you want to build a read-only endpoint for retrieving allocat
Some of you are probably scratching your heads at this point trying to figure
out exactly what the difference is between a domain service and a service layer.
+此时你们中的一些人可能正在抓耳挠腮,试图弄清楚领域服务和服务层之间究竟有什么区别。
+
((("application services")))
We're sorry—we didn't choose the names, or we'd have much cooler and friendlier
ways to talk about this stuff.
+很抱歉——这些名称不是我们起的,否则我们会用更酷、更友好的方式来描述这些东西。
+
((("orchestration", "using application service")))
We're using two things called a _service_ in this chapter. The first is an
_application service_ (our service layer). Its job is to handle requests from the
outside world and to _orchestrate_ an operation. What we mean is that the
service layer _drives_ the application by following a bunch of simple steps:
+在本章中,我们提到了两种被称为 _服务_ 的东西。第一种是 _应用服务_(也就是我们的服务层)。它的职责是处理来自外部世界的请求并 _协调_ 操作。
+我们的意思是,服务层通过执行一系列简单的步骤来 _驱动_ 应用程序:
+
* Get some data from the database
+从数据库获取一些数据
* Update the domain model
+更新领域模型
* Persist any changes
+持久化任何更改
This is the kind of boring work that has to happen for every operation in your
system, and keeping it separate from business logic helps to keep things tidy.
+这是一种在系统中每个操作都必须完成的枯燥工作,将其与业务逻辑分离有助于保持代码整洁有序。
+
((("domain services")))
The second type of service is a _domain service_. This is the name for a piece of
logic that belongs in the domain model but doesn't sit naturally inside a
@@ -671,8 +817,14 @@ part of the model, but it doesn't seem right to have a persisted entity for
the job. Instead a stateless TaxCalculator class or a `calculate_tax` function
can do the job.
+第二种服务是 _领域服务(domain service)_。这是指一段属于领域模型但不适合放在有状态实体或值对象中的逻辑。
+例如,如果你正在构建一个购物车应用程序,你可能会选择将税收规则构建为领域服务。计算税收是一项独立于更新购物车的工作,
+它是模型中的重要组成部分,但为这项工作创建一个持久化的实体似乎并不合适。相反,
+一个无状态的 TaxCalculator 类或者 `calculate_tax` 函数就能完成这项工作。
+
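+作为示意(一个假想的例子,不在本书的代码库中),无状态的领域服务可以简单到只是一个纯函数:
+
+[source,python]
+----
+# 假想示例:把税费规则做成无状态的领域服务
+from dataclasses import dataclass
+
+
+@dataclass(frozen=True)
+class CartLine:
+    sku: str
+    net_price: int  # 单价,以分为单位
+    qty: int
+
+
+def calculate_tax(lines: list[CartLine], rate: float = 0.2) -> int:
+    """重要的业务规则,却不属于任何持久化实体;税率 0.2 只是演示用的假设值。"""
+    return round(sum(line.net_price * line.qty for line in lines) * rate)
+----
+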
=== Putting Things in Folders to See Where It All Belongs
+将内容放入文件夹中以确定它们的归属
((("directory structure, putting project into folders")))
((("projects", "organizing into folders")))
@@ -682,10 +834,14 @@ As our application gets bigger, we'll need to keep tidying our directory
structure. The layout of our project gives us useful hints about what kinds of
object we'll find in each file.
+随着我们的应用程序变得越来越大,我们需要不断整理目录结构。项目的布局为我们提供了关于每个文件中可能会找到哪些类型对象的有用提示。
+
Here's one way we could organize things:
+以下是一种我们可以组织内容的方式:
+
[[nested_folder_tree]]
-.Some subfolders
+.Some subfolders(一些子文件夹)
====
[source,text]
[role="skip"]
@@ -727,11 +883,16 @@ Here's one way we could organize things:
`Aggregate`, and you might add an __exceptions.py__ for domain-layer exceptions
and, as you'll see in <>, [.keep-together]#__commands.py__# and __events.py__.
((("domain model", "folder for")))
+让我们为领域模型创建一个文件夹。目前它只是一个文件,但对于更复杂的应用程序,你可能会为每个类创建一个文件;
+你可能会为 `Entity`、`ValueObject` 和 `Aggregate` 创建辅助父类的文件,你还可以添加一个 __exceptions.py__ 来处理领域层的异常,
+并且正如你会在 <> 中看到的,还可以添加 [.keep-together]#__commands.py__# 和 __events.py__。
<2> We'll distinguish the service layer. Currently that's just one file
called _services.py_ for our service-layer functions. You could
add service-layer exceptions here, and as you'll see in
<>, we'll add __unit_of_work.py__.
+我们将区分服务层。目前它只是一个名为 _services.py_ 的文件,用于保存我们的服务层函数。你可以在这里添加服务层的异常处理,
+并且正如你将在 <> 中看到的,我们还会添加 __unit_of_work.py__。
<3> _Adapters_ is a nod to the ports and adapters terminology. This will fill
up with any other abstractions around external I/O (e.g., a __redis_client.py__).
@@ -741,45 +902,58 @@ Here's one way we could organize things:
((("inward-facing adapters")))
((("secondary adapters")))
((("driven adapters")))
+_Adapters_ 的命名来源于端口和适配器的术语。这里将包含围绕外部 I/O 的其他抽象(例如,一个 __redis_client.py__)。
+严格来说,这些可以称为 _次要(secondary)_ 适配器或 _被驱动(driven)_ 适配器,有时也称为 _面向内部_ 的适配器。
<4> Entrypoints are the places we drive our application from. In the
official ports and adapters terminology, these are adapters too, and are
referred to as _primary_, _driving_, or _outward-facing_ adapters.
((("entrypoints")))
+Entrypoints(入口点)是我们驱动应用程序的地方。在正式的端口和适配器术语中,它们也属于适配器,被称为 _主要(primary)_、_驱动(driving)_ 或 _面向外部_ 的适配器。
((("ports", "putting in folder with adapters")))
What about ports? As you may remember, they are the abstract interfaces that the
adapters implement. We tend to keep them in the same file as the adapters that
implement them.
+那么端口(ports)呢?你可能还记得,端口是适配器实现的抽象接口。我们通常将它们与实现它们的适配器保存在同一个文件中。
+
=== Wrap-Up
+总结
((("service layer", "benefits of")))
((("Flask framework", "Flask API and service layer", "service layer benefits")))
Adding the service layer has really bought us quite a lot:
+引入服务层确实为我们带来了不少好处:
+
* Our Flask API endpoints become very thin and easy to write: their
only responsibility is doing "web stuff," such as parsing JSON
and producing the right HTTP codes for happy or unhappy cases.
+我们的 Flask API 端点变得非常简洁、易于编写:它们唯一的职责就是处理“Web 相关的事情”,例如解析 JSON,以及为正常或异常情况返回合适的 HTTP 状态码。
* We've defined a clear API for our domain, a set of use cases or
entrypoints that can be used by any adapter without needing to know anything
about our domain model classes--whether that's an API, a CLI (see
<>), or the tests! They're an adapter for our domain too.
+我们为领域定义了一个清晰的 API,即一组用例或入口点,任何适配器都可以使用它们,而无需了解我们的领域模型类的任何细节——无论是 API、CLI(参见 <>),还是测试!它们本质上也是我们领域的一个适配器。
* We can write tests in "high gear" by using the service layer, leaving us
free to refactor the domain model in any way we see fit. As long as
we can still deliver the same use cases, we can experiment with new
designs without needing to rewrite a load of tests.
+我们可以通过使用服务层以“高速模式”编写测试,这使我们能够自由地按照需要重构领域模型。只要我们仍然能够实现相同的用例,就可以尝试新的设计,而无需重写大量的测试。
* And our test pyramid is looking good--the bulk of our tests
are fast unit tests, with just the bare minimum of E2E and integration
tests.
+而且我们的测试金字塔看起来很不错——大部分测试是快速的单元测试,仅有少量必要的端到端(E2E)和集成测试。
==== The DIP in Action
+依赖倒置原则(DIP)的实践应用
((("dependencies", "abstract dependencies of service layer")))
((("service layer", "dependencies of")))
@@ -788,20 +962,26 @@ Adding the service layer has really bought us quite a lot:
dependencies of our service layer: the domain model
and `AbstractRepository` (the port, in ports and adapters terminology).
+<> 显示了我们服务层的依赖关系:领域模型和 `AbstractRepository`(在端口和适配器的术语中称为端口)。
+
((("dependencies", "abstract dependencies of service layer", "testing")))
((("service layer", "dependencies of", "testing")))
When we run the tests, <> shows
how we implement the abstract dependencies by using `FakeRepository` (the
adapter).
+当我们运行测试时,<> 展示了我们如何通过使用 `FakeRepository`(适配器)来实现抽象依赖。
+
((("service layer", "dependencies of", "real dependencies at runtime")))
((("dependencies", "real service layer dependencies at runtime")))
And when we actually run our app, we swap in the "real" dependency shown in
<>.
+当我们实际运行应用程序时,我们会替换为 <> 中所示的“真实”依赖。
+
[role="width-75"]
[[service_layer_diagram_abstract_dependencies]]
-.Abstract dependencies of the service layer
+.Abstract dependencies of the service layer(服务层的抽象依赖)
image::images/apwp_0403.png[]
[role="image-source"]
----
@@ -821,7 +1001,7 @@ image::images/apwp_0403.png[]
[role="width-75"]
[[service_layer_diagram_test_dependencies]]
-.Tests provide an implementation of the abstract dependency
+.Tests provide an implementation of the abstract dependency(测试提供了对抽象依赖的实现)
image::images/apwp_0404.png[]
[role="image-source"]
----
@@ -850,7 +1030,7 @@ image::images/apwp_0404.png[]
[role="width-75"]
[[service_layer_diagram_runtime_dependencies]]
-.Dependencies at runtime
+.Dependencies at runtime(运行时的依赖)
image::images/apwp_0405.png[]
[role="image-source"]
----
@@ -890,41 +1070,53 @@ image::images/apwp_0405.png[]
Wonderful.
+妙啊。
+
((("service layer", "pros and cons or trade-offs")))
((("Flask framework", "Flask API and service layer", "service layer pros and cons")))
Let's pause for <>,
in which we consider the pros and cons of having a service layer at all.
+让我们暂停一下,看看 <>,权衡一下引入服务层究竟有哪些利弊。
+
[[chapter_04_service_layer_tradeoffs]]
[options="header"]
-.Service layer: the trade-offs
+.Service layer: the trade-offs(服务层:利弊权衡)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* We have a single place to capture all the use cases for our application.
+我们有一个统一的位置来收集应用程序的所有用例。
* We've placed our clever domain logic behind an API, which leaves us free to
refactor.
+我们将精妙的领域逻辑置于一个 API 的后面,这使我们可以自由地进行重构。
* We have cleanly separated "stuff that talks HTTP" from "stuff that talks
allocation."
+我们已将“处理 HTTP 的内容”与“处理分配的内容”清晰地分离开来。
* When combined with the Repository pattern and `FakeRepository`, we have
a nice way of writing tests at a higher level than the domain layer;
we can test more of our workflow without needing to use integration tests
(read on to <> for more elaboration on this).
+当与仓储模式(Repository pattern)和 `FakeRepository` 结合使用时,我们就有了一种在高于领域层的层级上编写测试的好方法;
+我们可以覆盖更多的工作流程,而无需动用集成测试(<> 中会对此进一步阐述)。
a|
* If your app is _purely_ a web app, your controllers/view functions can be
the single place to capture all the use cases.
+如果你的应用程序 _纯粹_ 是一个 Web 应用,那么你的控制器/视图函数可以作为收集所有用例的唯一场所。
* It's yet another layer of abstraction.
+这又是一层额外的抽象。
* Putting too much logic into the service layer can lead to the _Anemic Domain_
antipattern. It's better to introduce this layer after you spot orchestration
logic creeping into your controllers.
((("domain model", "getting benefits of rich model")))
((("Anemic Domain antipattern")))
+把过多的逻辑放进服务层会导致 _贫血领域模型(Anemic Domain)_ 反模式。更好的做法是:当你发现编排逻辑开始渗入控制器时,再引入这一层。
* You can get a lot of the benefits that come from having rich domain models
by simply pushing logic out of your controllers and down to the model layer,
@@ -932,17 +1124,24 @@ a|
controllers").
((("Flask framework", "Flask API and service layer", startref="ix_Flskapp")))
((("service layer", startref="ix_serlay")))
+通过简单地将逻辑从控制器中移到模型层,而无需在它们之间添加额外的层(也就是所谓的“胖模型,瘦控制器”),你可以获得许多使用丰富领域模型所带来的好处。
|===
But there are still some bits of awkwardness to tidy up:
+但仍有一些不太优雅的地方需要整理:
+
* The service layer is still tightly coupled to the domain, because
its API is expressed in terms of `OrderLine` objects. In
<>, we'll fix that and talk about
the way that the service layer enables more productive TDD.
+服务层仍然与领域紧密耦合,因为它的 API 是用 `OrderLine` 领域对象来表达的。在 <> 中,
+我们会解决这个问题,并讨论服务层如何促成更高效的 TDD。
* The service layer is tightly coupled to a `session` object. In <>,
we'll introduce one more pattern that works closely with the Repository and
Service Layer patterns, the Unit of Work pattern, and everything will be absolutely lovely.
You'll see!
+服务层与一个 `session` 对象紧密耦合。在 <> 中,我们将引入另一个与仓储模式和服务层模式密切配合的模式——
+工作单元(Unit of Work)模式,届时一切都会变得非常美好。你会看到的!
diff --git a/chapter_05_high_gear_low_gear.asciidoc b/chapter_05_high_gear_low_gear.asciidoc
index 265f159c..d53a7f0b 100644
--- a/chapter_05_high_gear_low_gear.asciidoc
+++ b/chapter_05_high_gear_low_gear.asciidoc
@@ -1,5 +1,6 @@
[[chapter_05_high_gear_low_gear]]
== TDD in High Gear and Low Gear
+高速档与低速档中的测试驱动开发 (TDD)
((("test-driven development (TDD)", id="ix_TDD")))
We've introduced the service layer to capture some of the additional
@@ -8,42 +9,60 @@ clearly define our use cases and the workflow for each: what
we need to get from our repositories, what pre-checks and current state
validation we should do, and what we save at the end.
+我们引入了服务层,来承担实际应用程序所需的一些额外编排职责。服务层帮助我们清晰地定义用例以及每个用例的工作流程:我们需要从仓储中获取什么数据,
+应该做哪些前置检查和当前状态校验,以及最后要保存什么内容。
+
((("test-driven development (TDD)", "unit tests operating at lower level, acting directly on model")))
But currently, many of our unit tests operate at a lower level, acting
directly on the model. In this chapter we'll discuss the trade-offs
involved in moving those tests up to the service-layer level, and
some more general testing guidelines.
+但目前,我们的许多单元测试运行在较低的层级,直接操作模型。在本章中,我们将讨论将这些测试上移到服务层级别时涉及的权衡,
+以及一些更为通用的测试指南。
+
-.Harry Says: Seeing a Test Pyramid in Action Was a Light-Bulb Moment
+.Harry Says: Seeing a Test Pyramid in Action Was a Light-Bulb Moment(Harry 说:看到测试金字塔的实际应用让我茅塞顿开)
*******************************************************************************
((("test-driven development (TDD)", "test pyramid, examining")))
Here are a few words from Harry directly:
+以下是 Harry 的几句话:
+
_I was initially skeptical of all Bob's architectural patterns, but seeing
an actual test pyramid made me a convert._
+_起初我对 Bob 的所有架构模式持怀疑态度,但看到一个实际的测试金字塔让我彻底信服了。_
+
_Once you implement domain modeling and the service layer, you really actually can
get to a stage where unit tests outnumber integration and end-to-end tests by
an order of magnitude. Having worked in places where the E2E test build would
take hours ("wait 'til tomorrow," essentially), I can't tell you what a
difference it makes to be able to run all your tests in minutes or seconds._
+_一旦你实现了领域建模和服务层,你真的可以达到这样一个阶段:单元测试的数量能够比集成测试和端到端测试多出一个数量级。曾经我在一些地方工作时,
+端到端测试的构建需要花费数小时(基本上是“等到明天吧”),我没法描述能够在几分钟甚至几秒内运行完所有测试带来的巨大改变。_
+
_Read on for some guidelines on how to decide what kinds of tests to write
and at which level. The high gear versus low gear way of thinking really changed
my testing life._
+
+_继续阅读,了解一些关于如何决定编写哪些类型的测试以及在哪个层级编写的指南。高速档与低速档的思维方式确实改变了我的测试工作方式。_
*******************************************************************************
=== How Is Our Test Pyramid Looking?
+我们的测试金字塔看起来如何?
((("service layer", "using, test pyramid and")))
((("test-driven development (TDD)", "test pyramid with service layer added")))
Let's see what this move to using a service layer, with its own service-layer tests,
does to our test pyramid:
+让我们来看看引入服务层以及为其编写服务层测试对我们的测试金字塔有何影响:
+
[[test_pyramid]]
-.Counting types of tests
+.Counting types of tests(统计测试类型)
====
[source,sh]
[role="skip"]
@@ -65,9 +84,12 @@ tests/e2e/test_api.py:2
Not bad! We have 15 unit tests, 8 integration tests, and just 2 end-to-end tests. That's
already a healthy-looking test pyramid.
+不错!我们有 15 个单元测试,8 个集成测试,以及仅仅 2 个端到端测试。这已经是一个非常健康的测试金字塔了。
+
=== Should Domain Layer Tests Move to the Service Layer?
+领域层测试是否应该移到服务层?
((("domain layer", "tests moving to service layer")))
((("service layer", "domain layer tests moving to")))
@@ -77,8 +99,11 @@ software against the service layer, we don't really need tests for the domain
model anymore. Instead, we could rewrite all of the domain-level tests from
<> in terms of the service layer:
+让我们看看再往前走一步会发生什么。既然我们可以针对服务层来测试软件,实际上就不再需要领域模型的测试了。
+取而代之,我们可以把 <> 中的所有领域层测试,改写成针对服务层的测试:
+
-.Rewriting a domain test at the service layer (tests/unit/test_services.py)
+.Rewriting a domain test at the service layer (tests/unit/test_services.py)(在服务层重写一个领域测试)
====
[source,python]
[role="skip"]
@@ -115,20 +140,30 @@ def test_prefers_warehouse_batches_to_shipments():
((("service layer", "domain layer tests moving to", "reasons for")))
Why would we want to do that?
+为什么我们会想要这么做呢?
+
Tests are supposed to help us change our system fearlessly, but often
we see teams writing too many tests against their domain model. This causes
problems when they come to change their codebase and find that they need to
update tens or even hundreds of unit tests.
+测试的目的是帮助我们无所畏惧地更改系统,但我们经常看到团队针对领域模型编写了过多的测试。这会在需要更改代码库时引发问题,
+因为他们可能发现需要更新几十甚至上百个单元测试。
+
This makes sense if you stop to think about the purpose of automated tests. We
use tests to enforce that a property of the system doesn't change while we're
working. We use tests to check that the API continues to return 200, that the
database session continues to commit, and that orders are still being allocated.
+如果你停下来思考一下自动化测试的目的,这就说得通了。我们使用测试是为了确保在我们工作时,系统的某些属性不会发生变化。
+我们使用测试来检查 API 是否仍然返回 200,数据库会话是否仍旧提交,以及订单是否仍被分配。
+
If we accidentally change one of those behaviors, our tests will break. The
flip side, though, is that if we want to change the design of our code, any
tests relying directly on that code will also fail.
+如果我们意外更改了这些行为之一,那么我们的测试就会失败。不过,反过来说,如果我们想更改代码的设计,任何直接依赖该代码的测试也会失败。
+
As we get further into the book, you'll see how the service layer forms an API
for our system that we can drive in multiple ways. Testing against this API
reduces the amount of code that we need to change when we refactor our domain
@@ -136,13 +171,19 @@ model. If we restrict ourselves to testing only against the service layer,
we won't have any tests that directly interact with "private" methods or
attributes on our model objects, which leaves us freer to refactor them.
+随着我们进一步阅读本书,你会看到服务层如何为我们的系统形成一个 API,这个 API 能以多种方式进行驱动。针对这个 API 进行测试可以
+减少在重构领域模型时需要更改的代码量。如果我们只限制自己测试服务层,那么就不会有任何测试直接与模型对象的“私有”方法或属性交互,
+这使得我们可以更自由地对它们进行重构。
+
TIP: Every line of code that we put in a test is like a blob of glue, holding
the system in a particular shape. The more low-level tests we have, the
harder it will be to change things.
+我们在测试中编写的每一行代码都像是一滴胶水,将系统固定成特定的形状。低层级测试越多,改变系统就会变得越困难。
[[kinds_of_tests]]
=== On Deciding What Kind of Tests to Write
+关于如何决定编写哪些类型的测试
((("domain model", "deciding whether to write tests against")))
((("coupling", "trade-off between design feedback and")))
@@ -152,8 +193,11 @@ wrong to write tests against the domain model?" To answer those questions, it's
important to understand the trade-off between coupling and design feedback (see
<>).
+你可能会问自己:“那我是否应该重写所有的单元测试呢?针对领域模型编写测试是不是错的?”要回答这些问题,
+理解耦合与设计反馈之间的取舍非常重要(参见<>)。
+
[[test_spectrum_diagram]]
-.The test spectrum
+.The test spectrum(测试光谱)
image::images/apwp_0501.png[]
[role="image-source"]
----
@@ -173,30 +217,47 @@ Extreme programming (XP) exhorts us to "listen to the code." When we're writing
tests, we might find that the code is hard to use or notice a code smell. This
is a trigger for us to refactor, and to reconsider our design.
+极限编程(XP)敦促我们“倾听代码的声音”。当我们编写测试时,可能会发现代码难以使用,或者察觉到代码有异味。
+这就是一个触发点,提醒我们进行重构并重新审视我们的设计。
+
We only get that feedback, though, when we're working closely with the target
code. A test for the HTTP API tells us nothing about the fine-grained design of
our objects, because it sits at a much higher level of abstraction.
+然而,只有当我们与目标代码密切合作时,才能获得这种反馈。针对 HTTP API 的测试无法告诉我们对象的细粒度设计情况,
+因为它处于更高的抽象层级。
+
On the other hand, we can rewrite our entire application and, so long as we
don't change the URLs or request formats, our HTTP tests will continue to pass.
This gives us confidence that large-scale changes, like changing the database schema,
haven't broken our code.
+另一方面,我们可以重写整个应用程序,只要不更改 URL 或请求格式,HTTP 测试仍然会通过。这让我们有信心进行大规模的更改,
+例如修改数据库模式,而不会破坏我们的代码。
+
At the other end of the spectrum, the tests we wrote in <> helped us to
flesh out our understanding of the objects we need. The tests guided us to a
design that makes sense and reads in the domain language. When our tests read
in the domain language, we feel comfortable that our code matches our intuition
about the problem we're trying to solve.
+在光谱的另一端,我们在 <> 中编写的测试帮助我们完善了对所需对象的理解。这些测试引导我们得到一个合理的、能用领域语言读出来的设计。
+当测试以领域语言书写时,我们会感到安心,因为代码与我们对所要解决问题的直觉是一致的。
+
Because the tests are written in the domain language, they act as living
documentation for our model. A new team member can read these tests to quickly
understand how the system works and how the core concepts interrelate.
+由于这些测试是用领域语言编写的,它们可以作为我们模型的动态文档。新团队成员可以通过阅读这些测试快速了解系统的工作原理以及核心概念之间的关系。
+
We often "sketch" new behaviors by writing tests at this level to see how the
code might look. When we want to improve the design of the code, though, we will need to replace
or delete these tests, because they are tightly coupled to a particular
[.keep-together]#implementation#.
+我们经常通过在这个层级编写测试来“勾勒”新行为,看看代码可能长什么样。然而,当我们想改进代码设计时,就需要替换或删除这些测试,
+因为它们与特定的 [.keep-together]#实现# 紧密耦合。
+
// IDEA: (EJ3) an example that is overmocked would be good here if you decide to
// add one. Ch12 already has one that could be expanded.
@@ -208,32 +269,44 @@ or delete these tests, because they are tightly coupled to a particular
=== High and Low Gear
+高速档与低速档
((("test-driven development (TDD)", "high and low gear")))
Most of the time, when we are adding a new feature or fixing a bug, we don't
need to make extensive changes to the domain model. In these cases, we prefer
to write tests against services because of the lower coupling and higher coverage.
+大多数情况下,当我们添加新功能或修复一个错误时,并不需要对领域模型进行大规模更改。在这些情况下,我们更倾向于针对服务编写测试,
+因为这样可以降低耦合且提高覆盖率。
+
((("service layer", "writing tests against")))
For example, when writing an `add_stock` function or a `cancel_order` feature,
we can work more quickly and with less coupling by writing tests against the
service layer.
+例如,在编写 `add_stock` 函数或 `cancel_order` 功能时,通过针对服务层编写测试,我们可以以更快的速度完成工作,并减少耦合。
+
((("domain model", "writing tests against")))
When starting a new project or when hitting a particularly gnarly problem,
we will drop back down to writing tests against the domain model so we
get better feedback and executable documentation of our intent.
+当启动一个新项目或遇到一个特别棘手的问题时,我们会退回到针对领域模型编写测试,以获得更好的反馈以及可执行的意图文档。
+
The metaphor we use is that of shifting gears. When starting a journey, the
bicycle needs to be in a low gear so that it can overcome inertia. Once we're off
and running, we can go faster and more efficiently by changing into a high gear;
but if we suddenly encounter a steep hill or are forced to slow down by a
hazard, we again drop down to a low gear until we can pick up speed again.
+我们使用的比喻是换挡。当开始一段旅程时,自行车需要处于低速档以克服惯性。一旦起步并行进,
+我们可以换到高速档以更快、更高效地行驶;但如果突然遇到陡坡或由于障碍被迫减速,我们会再次降到低速档,直到能够重新提速。
+
[[primitive_obsession]]
=== Fully Decoupling the Service-Layer Tests from the Domain
+将服务层测试与领域完全解耦
((("service layer", "fully decoupling from the domain", id="ix_serlaydec")))
((("domain layer", "fully decoupling service layer from", id="ix_domlaydec")))
@@ -242,13 +315,19 @@ We still have direct dependencies on the domain in our service-layer
tests, because we use domain objects to set up our test data and to invoke
our service-layer functions.
+我们的服务层测试中仍然直接依赖于领域模型,因为我们使用领域对象来设置测试数据并调用服务层函数。
+
To have a service layer that's fully decoupled from the domain, we need to
rewrite its API to work in terms of primitives.
+要让服务层与领域模型完全解耦,我们需要重写其 API,使其基于基础数据类型(primitives)工作。
+
Our service layer currently takes an `OrderLine` domain object:
+我们的服务层当前接收一个 `OrderLine` 领域对象:
+
[[service_domain]]
-.Before: allocate takes a domain object (service_layer/services.py)
+.Before: allocate takes a domain object (service_layer/services.py)(之前:`allocate` 接受一个领域对象)
====
[source,python]
[role="skip"]
@@ -259,8 +338,10 @@ def allocate(line: OrderLine, repo: AbstractRepository, session) -> str:
How would it look if its parameters were all primitive types?
+如果其参数全是基础数据类型,会是什么样子呢?
+
[[service_takes_primitives]]
-.After: allocate takes strings and ints (service_layer/services.py)
+.After: allocate takes strings and ints (service_layer/services.py)(之后:`allocate` 接受字符串和整数)
====
[source,python]
----
@@ -273,8 +354,10 @@ def allocate(
We rewrite the tests in those terms as well:
+我们也用这些基础数据类型重写测试:
+
[[tests_call_with_primitives]]
-.Tests now use primitives in function call (tests/unit/test_services.py)
+.Tests now use primitives in function call (tests/unit/test_services.py)(测试现在在函数调用中使用基础数据类型)
====
[source,python]
[role="non-head"]
@@ -292,8 +375,12 @@ But our tests still depend on the domain, because we still manually instantiate
`Batch` objects. So, if one day we decide to massively refactor how our `Batch`
model works, we'll have to change a bunch of tests.
+但是我们的测试仍然依赖于领域模型,因为我们仍需手动实例化 `Batch` 对象。因此,如果有一天我们决定对 `Batch` 模型的工作方式进行大规模重构,
+就不得不修改许多测试。
+
==== Mitigation: Keep All Domain Dependencies in Fixture Functions
+缓解措施:把所有领域依赖集中在夹具(fixture)函数中
((("faking", "FakeRepository", "adding fixture function on")))
((("fixture functions, keeping all domain dependencies in")))
@@ -303,9 +390,11 @@ We could at least abstract that out to a helper function or a fixture
in our tests. Here's one way you could do that, adding a factory
function on `FakeRepository`:
+我们至少可以把它抽象成测试中的一个辅助函数或夹具(fixture)。下面是一种做法:在 `FakeRepository` 上添加一个工厂函数:
+
[[services_factory_function]]
-.Factory functions for fixtures are one possibility (tests/unit/test_services.py)
+.Factory functions for fixtures are one possibility (tests/unit/test_services.py)(为夹具编写工厂函数是一种可能的做法)
====
[source,python]
[role="skip"]
@@ -332,8 +421,11 @@ def test_returns_allocation():
At least that would move all of our tests' dependencies on the domain
into one place.
+至少这样可以将我们所有测试对领域的依赖集中到一个地方。
+
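+这样的工厂函数大致如下(示意性草图,以随书代码为准):
+
+[source,python]
+----
+# 示意性草图:把测试对领域模型的依赖收拢到一个工厂函数里
+class FakeRepository(repository.AbstractRepository):
+    ...
+
+    @staticmethod
+    def for_batch(ref, sku, qty, eta=None):
+        return FakeRepository([model.Batch(ref, sku, qty, eta)])
+
+
+# 测试中不再手动实例化 Batch:
+repo = FakeRepository.for_batch("batch1", "RUSTY-SOAPDISH", 100)
+----
+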
==== Adding a Missing Service
+添加一个缺失的服务
((("test-driven development (TDD)", "fully decoupling service layer from the domain", "adding missing service")))
We could go one step further, though. If we had a service to add stock,
@@ -341,9 +433,12 @@ we could use that and make our service-layer tests fully expressed
in terms of the service layer's official use cases, removing all dependencies
on the domain:
+不过,我们还可以更进一步。如果我们有一个用于添加库存的服务,就可以使用该服务,使我们的服务层测试完全基于服务层的官方用例,
+从而移除对领域模型的所有依赖:
+
[[test_add_batch]]
-.Test for new add_batch service (tests/unit/test_services.py)
+.Test for new add_batch service (tests/unit/test_services.py)(测试新的 `add_batch` 服务)
====
[source,python]
----
@@ -359,12 +454,15 @@ def test_add_batch():
TIP: In general, if you find yourself needing to do domain-layer stuff directly
in your service-layer tests, it may be an indication that your service
layer is incomplete.
+通常情况下,如果你发现在服务层测试中需要直接处理领域层的内容,这可能表明你的服务层还不够完善。
[role="pagebreak-before"]
And the implementation is just two lines:
+而实现代码只有两行:
+
[[add_batch_service]]
-.A new service for add_batch (service_layer/services.py)
+.A new service for add_batch (service_layer/services.py)(一个用于 `add_batch` 的新服务)
====
[source,python]
----
@@ -386,15 +484,19 @@ def allocate(
NOTE: Should you write a new service just because it would help remove
dependencies from your tests? Probably not. But in this case, we
almost definitely would need an `add_batch` service one day [.keep-together]#anyway#.
+你是否应该仅仅为了移除测试中的依赖而编写一个新服务?大可不必。但就本例而言,反正我们几乎肯定
+总有一天会需要一个 `add_batch` 服务。
((("services", "service layer tests only using services")))
That now allows us to rewrite _all_ of our service-layer tests purely
in terms of the services themselves, using only primitives, and without
any dependencies on the model:
+现在,这使得我们可以将 *所有* 服务层测试纯粹以服务本身为基础重写,只使用基础数据类型(primitives),而无需任何对模型的依赖:
+
[[services_tests_all_services]]
-.Services tests now use only services (tests/unit/test_services.py)
+.Services tests now use only services (tests/unit/test_services.py)(服务测试现在仅使用服务)
====
[source,python]
----
@@ -422,8 +524,11 @@ This is a really nice place to be in. Our service-layer tests depend on only
the service layer itself, leaving us completely free to refactor the model as
we see fit.
+这真是一个令人愉快的境地。我们的服务层测试仅依赖于服务层本身,使我们可以完全自由地按照需要重构模型。
+
[role="pagebreak-before less_space"]
=== Carrying the Improvement Through to the E2E Tests
+将改进扩展到端到端(E2E)测试
((("E2E tests", see="end-to-end tests")))
((("end-to-end tests", "decoupling of service layer from domain, carrying through to")))
@@ -434,12 +539,17 @@ tests from the model, adding an API endpoint to add a batch would remove
the need for the ugly `add_stock` fixture, and our E2E tests could be free
of those hardcoded SQL queries and the direct dependency on the database.
+就像添加 `add_batch` 帮助将我们的服务层测试与模型解耦一样,添加一个用于添加批次的 API 端点可以去除丑陋的 `add_stock` 测试夹具的需求,
+而我们的端到端(E2E)测试也可以摆脱那些硬编码的 SQL 查询以及对数据库的直接依赖。
+
Thanks to our service function, adding the endpoint is easy, with just a little
JSON wrangling and a single function call required:
+多亏了我们的服务函数,添加这个端点非常简单,只需处理一点点 JSON,并进行一次函数调用:
+
[[api_for_add_batch]]
-.API for adding a batch (entrypoints/flask_app.py)
+.API for adding a batch (entrypoints/flask_app.py)(用于添加批次的 API)
====
[source,python]
----
@@ -467,13 +577,18 @@ NOTE: Are you thinking to yourself, POST to _/add_batch_? That's not
if you'd like to make it all more RESTy, maybe a POST to _/batches_,
then knock yourself out! Because Flask is a thin adapter, it'll be
easy. See <>.
+你是否在心里想,POST 到 _/add_batch_?这不太符合 RESTful!你完全正确。我们在这里确实有点随意,
+但如果你想让它更符合 REST 的风格,或许可以考虑 POST 到 _/batches_,那就随你喜欢了!因为 Flask 是一个轻量级的适配器,
+这会很容易实现。参见 <>。
And our hardcoded SQL queries from _conftest.py_ get replaced with some
API calls, meaning the API tests have no dependencies other than the API,
which is also nice:
+我们在 _conftest.py_ 中的那些硬编码 SQL 查询被一些 API 调用取代了,这意味着 API 测试除了依赖 API 本身之外没有其他依赖,这也非常不错:
+
[[api_tests_with_no_sql]]
-.API tests can now add their own batches (tests/e2e/test_api.py)
+.API tests can now add their own batches (tests/e2e/test_api.py)(API 测试现在可以添加它们自己的批次)
====
[source,python]
----
@@ -507,24 +622,28 @@ def test_happy_path_returns_201_and_allocated_batch():
=== Wrap-Up
+总结
((("service layer", "benefits to test-driven development")))
((("test-driven development (TDD)", "benefits of service layer to")))
Once you have a service layer in place, you really can move the majority
of your test coverage to unit tests and develop a healthy test pyramid.
+一旦你建立了服务层,确实可以将大部分测试覆盖移到单元测试中,从而构建一个合理的测试金字塔。
+
[role="nobreakinside less_space"]
[[types_of_test_rules_of_thumb]]
-.Recap: Rules of Thumb for Different Types of Test
+.Recap: Rules of Thumb for Different Types of Test(回顾:针对不同类型测试的经验法则)
******************************************************************************
-Aim for one end-to-end test per feature::
+Aim for one end-to-end test per feature(目标:每个功能一个端到端测试)::
This might be written against an HTTP API, for example. The objective
is to demonstrate that the feature works, and that all the moving parts
are glued together correctly.
((("end-to-end tests", "aiming for one test per feature")))
+例如,这可能是针对一个 HTTP API 编写的。目标是证明该功能可以正常工作,并且所有的组件都正确地结合在一起。
-Write the bulk of your tests against the service layer::
+Write the bulk of your tests against the service layer(将大部分测试编写在服务层上)::
These edge-to-edge tests offer a good trade-off between coverage,
runtime, and efficiency. Each test tends to cover one code path of a
feature and use fakes for I/O. This is the place to exhaustively
@@ -535,14 +654,19 @@ Write the bulk of your tests against the service layer::
can be useful. But see also <> and
<>.]
((("service layer", "writing bulk of tests against")))
+这些“边到边”(edge-to-edge)的测试在覆盖率、运行时间和效率之间提供了良好的权衡。每个测试通常覆盖一个功能的一条代码路径,并用假对象(fakes)处理 I/O。
+这里是穷尽所有边界情况、检验业务逻辑各处细节的最佳位置。脚注:[对在更高层级编写测试的一个合理担忧是,对于更复杂的用例,
+可能会出现组合爆炸。在这种情况下,针对各个协作的领域对象编写较低层级的单元测试可能会有用。
+但另请参见 <> 和 <>。]
-Maintain a small core of tests written against your domain model::
+Maintain a small core of tests written against your domain model(维护一小部分针对领域模型编写的核心测试)::
These tests have highly focused coverage and are more brittle, but they have
the highest feedback. Don't be afraid to delete these tests if the
functionality is later covered by tests at the service layer.
((("domain model", "maintaining small core of tests written against")))
+这些测试的覆盖面高度聚焦,也更脆弱,但能提供最充分的反馈。如果相关功能后来被服务层的测试覆盖了,不要害怕删除它们。
-Error handling counts as a feature::
+Error handling counts as a feature(错误处理也算一个功能)::
Ideally, your application will be structured such that all errors that
bubble up to your entrypoints (e.g., Flask) are handled in the same way.
This means you need to test only the happy path for each feature, and to
@@ -550,17 +674,26 @@ Error handling counts as a feature::
unit tests, of course).
((("test-driven development (TDD)", startref="ix_TDD")))
((("error handling", "counting as a feature")))
+理想情况下,你的应用程序结构应确保所有冒泡到入口点(例如,Flask)的错误都以相同的方式处理。这意味着你只需为每个功能测试其正常路径,
+并专门保留一个端到端测试用于测试所有异常路径(当然,还需要许多单元测试来覆盖各种异常路径)。
******************************************************************************
A few
things will help along the way:
+以下几点会对你有所帮助:
+
* Express your service layer in terms of primitives rather than domain objects.
+用基础数据类型(primitives)而不是领域对象来表达你的服务层。
* In an ideal world, you'll have all the services you need to be able to test
entirely against the service layer, rather than hacking state via
repositories or the database. This pays off in your end-to-end tests as well.
((("test-driven development (TDD)", "types of tests, rules of thumb for")))
+在理想情况下,你应该拥有所有需要的服务,能够完全针对服务层进行测试,而不是通过仓储或数据库来操作状态。
+这在你的端到端测试中也会有所收益。
Onto the next chapter!
+
+进入下一章!
diff --git a/chapter_06_uow.asciidoc b/chapter_06_uow.asciidoc
index 24c9a2a2..0826e361 100644
--- a/chapter_06_uow.asciidoc
+++ b/chapter_06_uow.asciidoc
@@ -1,16 +1,22 @@
[[chapter_06_uow]]
== Unit of Work Pattern
+工作单元模式
((("Unit of Work pattern", id="ix_UoW")))
In this chapter we'll introduce the final piece of the puzzle that ties
together the Repository and Service Layer patterns: the _Unit of Work_ pattern.
+在本章中,我们将介绍把仓储模式和服务层模式联结在一起的最后一块拼图:_工作单元_ 模式。
+
((("UoW", see="Unit of Work pattern")))
((("atomic operations")))
If the Repository pattern is our abstraction over the idea of persistent storage,
the Unit of Work (UoW) pattern is our abstraction over the idea of _atomic operations_. It
will allow us to finally and fully decouple our service layer from the data layer.
+如果说仓储模式是对持久化存储概念的抽象,那么工作单元(Unit of Work,UoW)模式就是对 _原子操作_ 概念的抽象。
+它将使我们最终完全将服务层与数据层解耦。
+
((("Unit of Work pattern", "without, API talking directly to three layers")))
((("APIs", "without Unit of Work pattern, talking directly to three layers")))
<> shows that, currently, a lot of communication occurs
@@ -18,11 +24,17 @@ across the layers of our infrastructure: the API talks directly to the database
layer to start a session, it talks to the repository layer to initialize
`SQLAlchemyRepository`, and it talks to the service layer to ask it to allocate.
+<> 展示了当前我们的基础设施各层之间存在大量通信:API 直接与数据库层交互以启动会话,
+与仓储层交互以初始化 `SQLAlchemyRepository`,并与服务层交互以请求进行分配。
+
[TIP]
====
The code for this chapter is in the
chapter_06_uow branch https://oreil.ly/MoWdZ[on [.keep-together]#GitHub#]:
+本章的代码位于
+chapter_06_uow 分支,链接:https://oreil.ly/MoWdZ[在 [.keep-together]#GitHub# 上]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -34,7 +46,7 @@ git checkout chapter_04_service_layer
[role="width-75"]
[[before_uow_diagram]]
-.Without UoW: API talks directly to three layers
+.Without UoW: API talks directly to three layers(没有工作单元:API 直接与三层交互)
image::images/apwp_0601.png[]
((("databases", "Unit of Work pattern managing state for")))
@@ -45,16 +57,22 @@ collaborates with the UoW (we like to think of the UoW as being part of the
service layer), but neither the service function itself nor Flask now needs
to talk directly to the database.
+<> 展示了我们的目标状态。现在,Flask API 仅执行两件事:初始化一个工作单元,并调用一个服务。
+服务与工作单元协作(我们倾向于将工作单元视为服务层的一部分),但服务函数本身和 Flask 都不再需要直接与数据库交互。
+
((("context manager")))
And we'll do it all using a lovely piece of Python syntax, a context manager.
+我们将通过一段优雅的 _Python_ 语法——上下文管理器来实现这一切。
+
[role="width-75"]
[[after_uow_diagram]]
-.With UoW: UoW now manages database state
+.With UoW: UoW now manages database state(有了工作单元:UoW 现在管理数据库状态)
image::images/apwp_0602.png[]
=== The Unit of Work Collaborates with the Repository
+工作单元与仓储协作
//TODO (DS) do you talk anywhere about multiple repositories?
@@ -62,8 +80,10 @@ image::images/apwp_0602.png[]
((("Unit of Work pattern", "collaboration with repository")))
Let's see the unit of work (or UoW, which we pronounce "you-wow") in action. Here's how the service layer will look when we're finished:
+让我们看看工作单元(Unit of Work,简称 UoW,我们发音为“you-wow”)的实际应用。当我们完成后,服务层将如下所示:
+
[[uow_preview]]
-.Preview of unit of work in action (src/allocation/service_layer/services.py)
+.Preview of unit of work in action (src/allocation/service_layer/services.py)(工作单元实际应用的预览)
====
[source,python]
----
@@ -82,12 +102,15 @@ def allocate(
<1> We'll start a UoW as a context manager.
((("context manager", "starting Unit of Work as")))
+我们将以上下文管理器的形式启动一个工作单元。
<2> `uow.batches` is the batches repo, so the UoW provides us
access to our permanent storage.
((("storage", "permanent, UoW providing entrypoint to")))
+`uow.batches` 是批次仓储,因此,工作单元为我们提供了访问持久存储的途径。
<3> When we're done, we commit or roll back our work, using the UoW.
+当我们完成后,我们使用工作单元提交或回滚我们的工作。
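+
+把这三个标注合起来,预览中的服务函数大致如下(示意性草图,以正文代码清单为准):
+
+[source,python]
+----
+# 示意性草图:服务层只与 UoW 协作,不再直接接触数据库
+def allocate(orderid: str, sku: str, qty: int, uow: AbstractUnitOfWork) -> str:
+    line = OrderLine(orderid, sku, qty)
+    with uow:  # (1) 以上下文管理器的形式启动 UoW
+        batches = uow.batches.list()  # (2) 经由 UoW 访问持久存储
+        if not is_valid_sku(line.sku, batches):
+            raise InvalidSku(f"Invalid sku {line.sku}")
+        batchref = model.allocate(line, batches)
+        uow.commit()  # (3) 完成后显式提交,否则退出时回滚
+    return batchref
+----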
((("object neighborhoods")))
((("collaborators")))
@@ -100,29 +123,41 @@ In responsibility-driven design, clusters of objects that collaborate in their
roles are called _object neighborhoods_, which is, in our professional opinion,
totally adorable.]
+工作单元充当我们持久化存储的单一入口,并且它会追踪加载了哪些对象以及它们的最新状态。脚注:
+你可能已经碰到过使用“协作者”一词来描述为了实现目标而协同工作的对象。在对象建模的意义上,工作单元和仓储就是协作者的一个很好的例子。
+在责任驱动设计中,那些在各自职责中协作的对象簇被称为 _对象邻域(object neighborhoods)_,从我们的专业角度来看,这个称呼简直可爱极了。
+
This gives us three useful things:
+这为我们提供了三大好处:
+
* A stable snapshot of the database to work with, so the
objects we use aren't changing halfway through an operation
+一个数据库的稳定快照,供我们使用,这样我们操作过程中使用的对象就不会中途发生变化。
* A way to persist all of our changes at once, so if something
goes wrong, we don't end up in an inconsistent state
+一种一次性持久化所有更改的方法,这样如果出现问题,我们就不会陷入不一致的状态。
* A simple API to our persistence concerns and a handy place
to get a repository
+一个简化的持久化操作接口,以及一个获取仓储的方便位置。
=== Test-Driving a UoW with Integration Tests
+通过集成测试对工作单元进行测试驱动开发
((("integration tests", "test-driving Unit of Work with")))
((("testing", "Unit of Work with integration tests")))
((("Unit of Work pattern", "test driving with integration tests")))
Here are our integration tests for the UOW:
+以下是我们针对工作单元的集成测试:
+
[[test_unit_of_work]]
-.A basic "round-trip" test for a UoW (tests/integration/test_uow.py)
+.A basic "round-trip" test for a UoW (tests/integration/test_uow.py)(针对工作单元的基础“往返”测试)
====
[source,python]
----
@@ -145,18 +180,23 @@ def test_uow_can_retrieve_a_batch_and_allocate_to_it(session_factory):
<1> We initialize the UoW by using our custom session factory
and get back a `uow` object to use in our `with` block.
+我们通过使用自定义的会话工厂初始化工作单元,并得到一个 `uow` 对象,以便在我们的 `with` 块中使用。
<2> The UoW gives us access to the batches repository via
`uow.batches`.
+工作单元通过 `uow.batches` 为我们提供访问批次仓储的途径。
<3> We call `commit()` on it when we're done.
+当我们完成后,我们调用 `commit()`。
((("SQL", "helpers for Unit of Work")))
For the curious, the `insert_batch` and `get_allocated_batch_ref` helpers look
like this:
+对于感兴趣的读者,`insert_batch` 和 `get_allocated_batch_ref` 辅助函数如下所示:
+
[[sql_helpers]]
-.Helpers for doing SQL stuff (tests/integration/test_uow.py)
+.Helpers for doing SQL stuff (tests/integration/test_uow.py)(用于处理 SQL 的辅助工具)
====
[source,python]
----
@@ -191,9 +231,13 @@ def get_allocated_batch_ref(session, orderid, sku):
is doing (double) assignment-unpacking to get the single value
back out of these two nested sequences.
It becomes readable once you've used it a few times!
+`[[orderlineid]] =` 这个语法或许有点过于炫技,我们表示歉意。这里发生的事情是:`session.execute` 返回一个行的列表,
+每一行都是由列值组成的元组;在我们的场景里,它是只有一行的列表,这一行又是只含一个列值的元组。
+左边的双重方括号做了(双重)解包赋值,把唯一的值从这两层嵌套序列中取出来。用过几次之后,这种写法就变得易读了!
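+
+用一个最小的演示来说明这个语法(数值是假设的,仅为演示):
+
+[source,python]
+----
+rows = [("order-123",)]  # 只有一行的列表,这一行是只有一列的元组
+[[orderlineid]] = rows  # 双重解包:外层解掉列表,内层解掉元组
+assert orderlineid == "order-123"
+----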
=== Unit of Work and Its Context Manager
+工作单元及其上下文管理器
((("Unit of Work pattern", "and its context manager")))
((("context manager", "Unit of Work and", id="ix_ctxtmgr")))
@@ -201,9 +245,11 @@ def get_allocated_batch_ref(session, orderid, sku):
In our tests we've implicitly defined an interface for what a UoW needs to do. Let's make that explicit by using an abstract
base class:
+在我们的测试中,实际上已经隐式定义了工作单元需要实现的接口。现在,让我们通过使用抽象基类将其明确化:
+
[[abstract_unit_of_work]]
-.Abstract UoW context manager (src/allocation/service_layer/unit_of_work.py)
+.Abstract UoW context manager (src/allocation/service_layer/unit_of_work.py)(抽象工作单元上下文管理器)
====
[source,python]
[role="skip"]
@@ -226,24 +272,31 @@ class AbstractUnitOfWork(abc.ABC):
<1> The UoW provides an attribute called `.batches`, which will give us access
to the batches repository.
+工作单元提供了一个名为 `.batches` 的属性,它使我们能够访问批次仓储。
<2> If you've never seen a context manager, +++__enter__+++ and +++__exit__+++ are
the two magic methods that execute when we enter the `with` block and
when we exit it, respectively. They're our setup and teardown phases.
((("magic methods", "__enter__ and __exit__", secondary-sortas="enter")))
((("__enter__ and __exit__ magic methods", primary-sortas="enter and exit")))
+如果你从未见过上下文管理器,+++__enter__+++ 和 +++__exit__+++ 是两个魔法方法,
+分别在我们进入 `with` 块和退出 `with` 块时执行。它们对应我们的设置(setup)和销毁(teardown)阶段。
<3> We'll call this method to explicitly commit our work when we're ready.
+当我们准备好时,我们将调用此方法来显式提交我们的工作。
<4> If we don't commit, or if we exit the context manager by raising an error,
we do a `rollback`. (The rollback has no effect if `commit()` has been
called. Read on for more discussion of this.)
((("rollbacks")))
+如果我们没有调用 `commit()`,或者通过引发错误退出上下文管理器,我们将执行一次 `rollback`(回滚)。
+(如果已经调用了 `commit()`,回滚将不起作用。后续会有更多相关讨论。)
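+
+如果你还不熟悉上下文管理器,下面这个与本书无关的最小示例演示了 +++__enter__+++/+++__exit__+++ 的执行时机:
+
+[source,python]
+----
+# 最小演示:与本书代码无关
+class Demo:
+    def __enter__(self):
+        print("setup:进入 with 块时执行")
+        return self  # `as` 后面拿到的就是这个返回值
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        # 退出 with 块时执行;若块内抛出异常,这几个参数不为 None
+        print("teardown:退出 with 块时执行")
+
+
+with Demo():
+    print("with 块内的工作")
+----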
// TODO: bring this code listing back under test, remove `return self` from all the uows.
==== The Real Unit of Work Uses SQLAlchemy Sessions
+使用 SQLAlchemy 会话的真实工作单元
((("Unit of Work pattern", "and its context manager", "real UoW using SQLAlchemy session")))
((("databases", "SQLAlchemy adding session for Unit of Work")))
@@ -251,8 +304,10 @@ class AbstractUnitOfWork(abc.ABC):
The main thing that our concrete implementation adds is the
database session:
+我们的具体实现主要增加了一个数据库会话:
+
[[unit_of_work]]
-.The real SQLAlchemy UoW (src/allocation/service_layer/unit_of_work.py)
+.The real SQLAlchemy UoW (src/allocation/service_layer/unit_of_work.py)(真实的 SQLAlchemy 工作单元)
====
[source,python]
----
@@ -287,17 +342,21 @@ class SqlAlchemyUnitOfWork(AbstractUnitOfWork):
<1> The module defines a default session factory that will connect to Postgres,
but we allow that to be overridden in our integration tests so that we
can use SQLite instead.
+该模块定义了一个默认会话工厂,用于连接到 Postgres,但我们允许在集成测试中重写它,这样我们就可以改用 SQLite。
<2> The +++__enter__+++ method is responsible for starting a database session and instantiating
a real repository that can use that session.
((("__enter__ and __exit__ magic methods", primary-sortas="enter and exit")))
++++__enter__+++ 方法负责启动一个数据库会话并实例化一个能够使用该会话的真实仓储。
<3> We close the session on exit.
+在退出时,我们会关闭会话。
<4> Finally, we provide concrete `commit()` and `rollback()` methods that
use our database session.
((("commits", "commit method")))
((("rollbacks", "rollback method")))
+最后,我们提供了具体的 `commit()` 和 `rollback()` 方法来操作我们的数据库会话。
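For reference, here is the concrete class sketched against those callouts (assuming the book's `config.get_postgres_uri()` helper):

[source,python]
----
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from allocation import config
from allocation.adapters import repository

DEFAULT_SESSION_FACTORY = sessionmaker(  # (1) defaults to Postgres ...
    bind=create_engine(config.get_postgres_uri())
)


class SqlAlchemyUnitOfWork(AbstractUnitOfWork):
    def __init__(self, session_factory=DEFAULT_SESSION_FACTORY):
        self.session_factory = session_factory  # (1) ... but tests can override it

    def __enter__(self):
        self.session = self.session_factory()  # (2) start a session
        self.batches = repository.SqlAlchemyRepository(self.session)  # (2)
        return super().__enter__()

    def __exit__(self, *args):
        super().__exit__(*args)
        self.session.close()  # (3) close the session on exit

    def commit(self):  # (4) concrete commit/rollback use the session
        self.session.commit()

    def rollback(self):
        self.session.rollback()
----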
//IDEA: why not swap out db using os.environ?
// (EJ2) Could be a good idea to point out that this couples the unit of work to postgres.
@@ -310,14 +369,17 @@ class SqlAlchemyUnitOfWork(AbstractUnitOfWork):
==== Fake Unit of Work for Testing
+用于测试的伪工作单元
((("Unit of Work pattern", "and its context manager", "fake UoW for testing")))
((("faking", "FakeUnitOfWork for service layer testing")))
((("testing", "fake UoW for service layer testing")))
Here's how we use a fake UoW in our service-layer tests:
+以下是我们在服务层测试中使用伪工作单元的方式:
+
[[fake_unit_of_work]]
-.Fake UoW (tests/unit/test_services.py)
+.Fake UoW (tests/unit/test_services.py)(伪工作单元)
====
[source,python]
----
@@ -352,19 +414,25 @@ def test_allocate_returns_allocation():
<1> `FakeUnitOfWork` and `FakeRepository` are tightly coupled,
just like the real `UnitOfWork` and `Repository` classes.
That's fine because we recognize that the objects are collaborators.
+`FakeUnitOfWork` 和 `FakeRepository` 紧密耦合,就像真实的 `UnitOfWork` 和 `Repository` 类一样。
+这没有问题,因为我们知道这些对象只是协作者。
<2> Notice the similarity with the fake `commit()` function
from `FakeSession` (which we can now get rid of). But it's
a substantial improvement because we're now [.keep-together]#faking# out
code that we wrote rather than third-party code. Some
people say, https://oreil.ly/0LVj3["Don't mock what you don't own"].
+注意它与 `FakeSession` 中伪造的 `commit()` 函数的相似之处(我们现在可以将其移除)。但这是一项重要的改进,
+因为我们现在是在 [.keep-together]#伪造# 我们自己编写的代码,而不是第三方代码。
+有些人会说, https://oreil.ly/0LVj3[“不要模拟你不拥有的东西”]。
<3> In our tests, we can instantiate a UoW and pass it to
our service layer, rather than passing a repository and a session.
This is considerably less cumbersome.
+在我们的测试中,我们可以实例化一个工作单元并将其传递给服务层,而不是传递一个仓储和一个会话。这要简单得多。
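A sketch of the fake, matching the callouts (it assumes the `FakeRepository` from the earlier service-layer tests):

[source,python]
----
class FakeUnitOfWork(unit_of_work.AbstractUnitOfWork):
    def __init__(self):
        self.batches = FakeRepository([])  # (1) coupled fakes, like the real pair
        self.committed = False

    def commit(self):
        self.committed = True  # (2) faking code we own, not the third-party session

    def rollback(self):
        pass
----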
[role="nobreakinside less_space"]
-.Don't Mock What You Don't Own
+.Don't Mock What You Don't Own(不要模拟你不拥有的东西)
********************************************************************************
((("SQLAlchemy", "database session for Unit of Work", "not mocking")))
((("mocking", "don't mock what you don't own")))
@@ -373,6 +441,10 @@ Both of our fakes achieve the same thing: they give us a way to swap out our
persistence layer so we can run tests in memory instead of needing to
talk to a real database. The difference is in the resulting design.
+为什么我们对模拟工作单元比模拟会话更感到放心?
+我们的两个伪对象(Fake)实现了相同的目标:为我们提供一种替换持久化层的方式,这样我们可以在内存中运行测试,
+而无需与真实数据库交互。区别在于它们带来了不同的设计结果。
+
If we cared only about writing tests that run quickly, we could create mocks
that replace SQLAlchemy and use those throughout our codebase. The problem is
that `Session` is a complex object that exposes lots of persistence-related
@@ -381,28 +453,41 @@ the database, but that quickly leads to data access code being sprinkled all
over the codebase. To avoid that, we want to limit access to our persistence
layer so each component has exactly what it needs and nothing more.
+如果我们只关心编写运行速度快的测试,那么我们可以创建替代 SQLAlchemy 的模拟对象(mocks),并在整个代码库中使用它们。
+问题在于,`Session` 是一个复杂的对象,它暴露了许多与持久化相关的功能。使用 `Session` 可以随意对数据库进行查询,
+但这很容易导致数据访问代码散布在代码库的各个地方。为了避免这种情况,我们希望限制对持久化层的访问,以保证每个组件只拥有它需要的内容,不多也不少。
+
By coupling to the `Session` interface, you're choosing to couple to all the
complexity of SQLAlchemy. Instead, we want to choose a simpler abstraction and
use that to clearly separate responsibilities. Our UoW is much simpler
than a session, and we feel comfortable with the service layer being able to
start and stop units of work.
+通过耦合到 `Session` 接口,你实际上选择了与 SQLAlchemy 的所有复杂性进行耦合。而我们希望选择一个更简单的抽象,并以此清晰地分离职责。
+我们的 UoW 比 `Session` 简单得多,我们也对服务层能够启动和停止工作单元感到放心。
+
"Don't mock what you don't own" is a rule of thumb that forces us to build
these simple abstractions over messy subsystems. This has the same performance
benefit as mocking the SQLAlchemy session but encourages us to think carefully
about our designs.
((("context manager", "Unit of Work and", startref="ix_ctxtmgr")))
+
+“不要模拟你不拥有的东西”是一条经验法则,它促使我们在混乱的子系统之上构建这些简单的抽象。这不仅与模拟 SQLAlchemy 会话具有相同的性能优势,
+还鼓励我们认真思考我们的设计。
********************************************************************************
=== Using the UoW in the Service Layer
+在服务层中使用工作单元
((("Unit of Work pattern", "using UoW in service layer")))
((("service layer", "using Unit of Work in")))
Here's what our new service layer looks like:
+以下是新的服务层代码:
+
[[service_layer_with_uow]]
-.Service layer using UoW (src/allocation/service_layer/services.py)
+.Service layer using UoW (src/allocation/service_layer/services.py)(使用工作单元的服务层)
====
[source,python]
----
@@ -433,9 +518,11 @@ def allocate(
<1> Our service layer now has only the one dependency,
once again on an _abstract_ UoW.
((("dependencies", "service layer dependency on abstract UoW")))
+我们的服务层现在只有一个依赖,再次依赖于一个 _抽象的_ 工作单元。
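In sketch form, with the UoW as the single abstract dependency (`InvalidSku` and `is_valid_sku` are the service-layer helpers from earlier chapters):

[source,python]
----
def allocate(
    orderid: str, sku: str, qty: int,
    uow: unit_of_work.AbstractUnitOfWork,  # (1) the one dependency, on an abstraction
) -> str:
    line = OrderLine(orderid, sku, qty)
    with uow:
        batches = uow.batches.list()
        if not is_valid_sku(line.sku, batches):
            raise InvalidSku(f"Invalid sku {line.sku}")
        batchref = model.allocate(line, batches)
        uow.commit()
    return batchref
----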
=== Explicit Tests for Commit/Rollback Behavior
+针对提交/回滚行为的明确测试
((("commits", "explicit tests for")))
((("rollbacks", "explicit tests for")))
@@ -444,8 +531,10 @@ def allocate(
To convince ourselves that the commit/rollback behavior works, we wrote
a couple of tests:
+为让我们确信提交/回滚行为的正常运作,我们编写了几个测试:
+
[[testing_rollback]]
-.Integration tests for rollback behavior (tests/integration/test_uow.py)
+.Integration tests for rollback behavior (tests/integration/test_uow.py)(针对回滚行为的集成测试)
====
[source,python]
----
@@ -482,20 +571,28 @@ TIP: We haven't shown it here, but it can be worth testing some of the more
some of the tests to using the real database. It's convenient that our UoW
class makes that easy!
((("databases", "testing transactions against real database")))
+我们在这里没有展示,但测试一些更“晦涩”的数据库行为(比如事务)与“真实”数据库的交互可能是值得的——也就是说,使用相同的引擎。
+目前,我们暂时使用 SQLite 而不是 Postgres,但在 <> 中,我们会将部分测试切换为使用真实数据库。
+很方便的是,我们的 UoW 类让这一切变得简单!
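For orientation, the two rollback tests referenced above look roughly like this (a sketch in the SQLAlchemy 1.x string-execute style used throughout the chapter; the SKU strings are illustrative):

[source,python]
----
import pytest


def test_rolls_back_uncommitted_work_by_default(session_factory):
    uow = unit_of_work.SqlAlchemyUnitOfWork(session_factory)
    with uow:
        insert_batch(uow.session, "batch1", "MEDIUM-PLINTH", 100, None)
        # no explicit commit here

    new_session = session_factory()
    rows = list(new_session.execute('SELECT * FROM "batches"'))
    assert rows == []


def test_rolls_back_on_error(session_factory):
    class MyException(Exception):
        pass

    uow = unit_of_work.SqlAlchemyUnitOfWork(session_factory)
    with pytest.raises(MyException):
        with uow:
            insert_batch(uow.session, "batch1", "LARGE-FORK", 100, None)
            raise MyException()

    new_session = session_factory()
    rows = list(new_session.execute('SELECT * FROM "batches"'))
    assert rows == []
----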
=== Explicit Versus Implicit Commits
+显式提交与隐式提交
((("implicit versus explicit commits")))
((("commits", "explicit versus implicit")))
((("Unit of Work pattern", "explicit versus implicit commits")))
Now we briefly digress on different ways of implementing the UoW pattern.
+现在我们将简要讨论实现工作单元模式的不同方式。
+
We could imagine a slightly different version of the UoW that commits by default
and rolls back only if it spots an exception:
+我们可以设想一种稍有不同的工作单元实现,它默认提交,并且仅在发现异常时回滚:
+
[[uow_implicit_commit]]
-.A UoW with implicit commit... (src/allocation/unit_of_work.py)
+.A UoW with implicit commit... (src/allocation/unit_of_work.py)(一个具有隐式提交的工作单元...)
====
[source,python]
[role="skip"]
@@ -515,13 +612,17 @@ class AbstractUnitOfWork(abc.ABC):
====
<1> Should we have an implicit commit in the happy path?
+我们是否应该在正常路径中使用隐式提交?
<2> And roll back only on exception?
+并仅在发生异常时执行回滚?
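In other words, the `__exit__` of such a UoW would look something like this sketch:

[source,python]
----
class AbstractUnitOfWork(abc.ABC):
    def __enter__(self):
        return self

    def __exit__(self, exn_type, exn_value, traceback):
        if exn_type is None:
            self.commit()  # (1) implicit commit on the happy path
        else:
            self.rollback()  # (2) roll back only on exception
----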
It would allow us to save a line of code and to remove the explicit commit from our
client code:
+这将使我们节省一行代码,并从客户端代码中移除显式提交的操作:
+
[[add_batch_nocommit]]
-.\...would save us a line of code (src/allocation/service_layer/services.py)
+.\...would save us a line of code (src/allocation/service_layer/services.py)(...会为我们节省一行代码)
====
[source,python]
[role="skip"]
@@ -536,17 +637,26 @@ def add_batch(ref: str, sku: str, qty: int, eta: Optional[date], uow):
This is a judgment call, but we tend to prefer requiring the explicit commit
so that we have to choose when to flush state.
+这是一种判断上的选择,但我们倾向于要求显式提交,这样我们就必须明确地选择何时刷新状态。
+
Although we use an extra line of code, this makes the software safe by default.
The default behavior is to _not change anything_. In turn, that makes our code
easier to reason about because there's only one code path that leads to changes
in the system: total success and an explicit commit. Any other code path, any
exception, any early exit from the UoW's scope leads to a safe state.
+尽管我们多用了一行代码,但这使得软件在默认情况下是安全的。默认的行为是 _不做任何更改_。反过来,这让我们的代码更容易理解,
+因为只有一条代码路径会导致系统发生更改:完全成功并显式提交。任何其他代码路径、任何异常、任何提前退出工作单元作用域的情况,最终都会处于安全状态。
+
Similarly, we prefer to roll back by default because
it's easier to understand; this rolls back to the last commit,
so either the user did one, or we blow their changes away. Harsh but simple.
+同样地,我们倾向于默认执行回滚,因为这样更容易理解;这会回滚到上一次提交的状态,所以要么用户进行了提交,要么我们就丢弃他们的更改。
+虽然严格,但却简单明了。
+
=== Examples: Using UoW to Group Multiple Operations into an Atomic Unit
+示例:使用工作单元将多个操作组合成一个原子单元
((("atomic operations", "using Unit of Work to group operations into atomic unit", id="ix_atomops")))
((("Unit of Work pattern", "using UoW to group multiple operations into atomic unit", id="ix_UoWatom")))
@@ -554,14 +664,19 @@ Here are a few examples showing the Unit of Work pattern in use. You can
see how it leads to simple reasoning about what blocks of code happen
together.
+以下是一些展示工作单元模式使用的示例。你可以看到它如何让我们能够简单地推理哪些代码块会一同执行。
+
==== Example 1: Reallocate
+示例 1:重新分配
((("Unit of Work pattern", "using UoW to group multiple operations into atomic unit", "reallocate function example")))
((("reallocate service function")))
Suppose we want to be able to deallocate and then reallocate orders:
+假设我们希望能够先取消分配订单,然后重新分配订单:
+
[[reallocate]]
-.Reallocate service function
+.Reallocate service function(重新分配服务函数)
====
[source,python]
[role="skip"]
@@ -581,19 +696,24 @@ def reallocate(
====
<1> If `deallocate()` fails, we don't want to call `allocate()`, obviously.
+显然,如果 `deallocate()` 失败,我们不希望调用 `allocate()`。
<2> If `allocate()` fails, we probably don't want to actually commit
the `deallocate()` either.
+如果 `allocate()` 失败,我们可能也不希望实际提交 `deallocate()` 的操作。
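The function under discussion, as a sketch (it reuses `InvalidSku` and the `allocate` service from earlier):

[source,python]
----
def reallocate(line: OrderLine, uow: AbstractUnitOfWork) -> str:
    with uow:
        batch = uow.batches.get(sku=line.sku)
        if batch is None:
            raise InvalidSku(f"Invalid sku {line.sku}")
        batch.deallocate(line)  # (1) if this fails, allocate() is never called
        allocate(line)  # (2) if this fails, the deallocate is never committed
        uow.commit()
----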
==== Example 2: Change Batch Quantity
+示例 2:更改批次数量
((("Unit of Work pattern", "using UoW to group multiple operations into atomic unit", "changing batch quantity example")))
Our shipping company gives us a call to say that one of the container doors
opened, and half our sofas have fallen into the Indian Ocean. Oops!
+我们的运输公司打电话告诉我们,其中一个集装箱的门打开了,我们一半的沙发掉进了印度洋。糟糕!
+
[[change_batch_quantity]]
-.Change quantity
+.Change quantity(更改数量)
====
[source,python]
[role="skip"]
@@ -615,9 +735,11 @@ def change_batch_quantity(
at any stage, we probably want to commit none of the changes.
((("Unit of Work pattern", "using UoW to group multiple operations into atomic unit", startref="ix_UoWatom")))
((("atomic operations", "using Unit of Work to group operations into atomic unit", startref="ix_atomops")))
+在这里,我们可能需要取消分配任意数量的订单项。如果在任何阶段出现失败,我们可能希望不提交任何更改。
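As a sketch, the service looks like this (the `deallocate_one()` loop is what may touch any number of order lines):

[source,python]
----
def change_batch_quantity(
    batchref: str, new_qty: int,
    uow: AbstractUnitOfWork,
):
    with uow:
        batch = uow.batches.get(reference=batchref)
        batch.change_purchased_quantity(new_qty)
        while batch.available_quantity < 0:
            line = batch.deallocate_one()  # may deallocate any number of lines
        uow.commit()  # if anything above raises, nothing at all is committed
----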
=== Tidying Up the Integration Tests
+整理集成测试
((("testing", "Unit of Work with integration tests", "tidying up tests")))
((("Unit of Work pattern", "tidying up integration tests")))
@@ -625,6 +747,8 @@ We now have three sets of tests, all essentially pointing at the database:
_test_orm.py_, _test_repository.py_, and _test_uow.py_. Should we throw any
away?
+我们现在有三组测试,它们本质上都指向数据库:_test_orm.py_、_test_repository.py_ 和 _test_uow.py_。我们应该丢弃其中的某些测试吗?
+
====
[source,text]
[role="tree"]
@@ -653,8 +777,12 @@ it's doing are covered in _test_repository.py_. That last test, you might keep a
but we could certainly see an argument for just keeping everything at the highest
possible level of abstraction (just as we did for the unit tests).
+如果你认为某些测试从长期来看不会带来价值,你完全可以随时将它们删除。我们会说 _test_orm.py_ 主要是帮助我们学习 SQLAlchemy 的工具,
+因此从长期来看我们并不需要它,特别是当它的主要功能已经被 _test_repository.py_ 所覆盖时。而对于最后的那个测试 (_test_uow.py_),
+你可能会选择保留,但我们也完全可以接受只保留尽可能高层次抽象的测试(就像我们对单元测试所做的一样)的观点。
+
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
******************************************************************************
For this chapter, probably the best thing to try is to implement a
UoW from scratch. The code, as always, is https://github.com/cosmicpython/code/tree/chapter_06_uow_exercise[on GitHub]. You could either follow the model we have quite closely,
@@ -665,101 +793,139 @@ or rollback on exit. If you feel like going all-functional rather than
messing about with all these classes, you could use `@contextmanager` from
`contextlib`.
+对于本章来说,可能最好的尝试是从头实现一个工作单元。
+代码一如既往地可以在 https://github.com/cosmicpython/code/tree/chapter_06_uow_exercise[GitHub 上] 找到。
+你可以选择非常贴近我们现有的示例模型,也可以尝试将 UoW 与上下文管理器分离开来进行实验(工作单元的职责是 `commit()`、`rollback()` 并提供 `.batches` 仓储,
+而上下文管理器的职责是进行初始化,然后在退出时执行提交或回滚操作)。如果你想完全采用函数式的方式,而不是处理这些类,你可以使用 `contextlib` 中的 `@contextmanager`。
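One possible functional shape, purely as a sketch (yielding the repository together with a commit hook is a design choice of this example, not the book's API):

[source,python]
----
from contextlib import contextmanager


@contextmanager
def unit_of_work(session_factory):
    session = session_factory()
    batches = SqlAlchemyRepository(session)
    try:
        yield batches, session.commit  # hand back the repo and a commit hook
    finally:
        session.rollback()  # harmless no-op if commit() was already called
        session.close()
----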
+
We've stripped out both the actual UoW and the fakes, as well as paring back
the abstract UoW. Why not send us a link to your repo if you come up with
something you're particularly proud of?
+
+我们已经剥离了实际的工作单元和伪对象,同时也简化了抽象工作单元。如果你设计出令自己特别自豪的东西,为什么不把你的代码仓库链接发给我们呢?
******************************************************************************
TIP: This is another example of the lesson from <>:
as we build better abstractions, we can move our tests to run against them,
which leaves us free to change the underlying details.
+这是来自<>的一课的另一个例子:当我们构建出更好的抽象时,
+我们可以让测试针对这些抽象运行,这使得我们能够自由地更改底层的细节。
=== Wrap-Up
+总结
((("Unit of Work pattern", "benefits of using")))
Hopefully we've convinced you that the Unit of Work pattern is useful, and
that the context manager is a really nice Pythonic way
of visually grouping code into blocks that we want to happen atomically.
+希望我们已经让你相信,工作单元模式是有用的,并且上下文管理器是一种非常优雅、符合 Python 风格的方式,
+可以直观地将我们希望原子化执行的代码分组到块中。
+
((("Session object")))
((("SQLAlchemy", "Session object")))
This pattern is so useful, in fact, that SQLAlchemy already uses a UoW
in the shape of the `Session` object. The `Session` object in SQLAlchemy is the way
that your application loads data from the database.
+事实上,这种模式非常有用,以至于 SQLAlchemy 已经在其 `Session` 对象中实现了一个工作单元。在 SQLAlchemy 中,
+`Session` 对象是你的应用程序从数据库加载数据的方式。
+
Every time you load a new entity from the database, the session begins to _track_
changes to the entity, and when the session is _flushed_, all your changes are
persisted together. Why do we go to the effort of abstracting away the SQLAlchemy session if it already implements the pattern we want?
+每次你从数据库加载一个新的实体时,`Session` 会开始 _追踪_ 该实体的更改,而当 `Session` 被 _刷新(flushed)_ 时,
+所有的更改都会被一起持久化。那么,既然 SQLAlchemy 的 `Session` 已经实现了我们想要的模式,为什么我们还要费力地对它进行抽象呢?
+
((("Unit of Work pattern", "pros and cons or trade-offs")))
<> discusses some of the trade-offs.
+<> 讨论了一些权衡取舍。
+
[[chapter_06_uow_tradeoffs]]
[options="header"]
-.Unit of Work pattern: the trade-offs
+.Unit of Work pattern: the trade-offs(工作单元模式:权衡取舍)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* We have a nice abstraction over the concept of atomic operations, and the
context manager makes it easy to see, visually, what blocks of code are
grouped together atomically.
((("atomic operations", "Unit of Work as abstraction over")))
((("transactions", "Unit of Work and")))
+我们在原子操作的概念上拥有了一个优雅的抽象,上下文管理器使我们能够直观地看到哪些代码块被归组到了一起以原子方式执行。
* We have explicit control over when a transaction starts and finishes, and our
application fails in a way that is safe by default. We never have to worry
that an operation is partially committed.
+我们对事务的开始和结束有明确的控制,并且我们的应用程序默认情况下能以一种安全的方式失败。我们永远不必担心某个操作只被部分提交。
* It's a nice place to put all your repositories so client code can access them.
+这是一个放置所有仓储的好地方,这样客户端代码就可以访问它们。
* As you'll see in later chapters, atomicity isn't only about transactions; it
can help us work with events and the message bus.
+正如你将在后续章节中看到的,原子性不仅仅与事务有关;它还可以帮助我们处理事件和消息总线。
a|
* Your ORM probably already has some perfectly good abstractions around
atomicity. SQLAlchemy even has context managers. You can go a long way
just passing a session around.
+你的 ORM 可能已经有一些非常好的关于原子性的抽象。SQLAlchemy 甚至提供了上下文管理器。仅仅通过传递一个 session,你也能实现很多功能。
* We've made it look easy, but you have to think quite carefully about
things like rollbacks, multithreading, and nested transactions. Perhaps just
sticking to what Django or Flask-SQLAlchemy gives you will keep your life
simpler.
((("Unit of Work pattern", startref="ix_UoW")))
+虽然我们让这一切看起来很简单,但你必须非常仔细地考虑诸如回滚、多线程以及嵌套事务等问题。
+也许只是坚持使用 Django 或 Flask-SQLAlchemy 提供的功能会让你的生活更简单一些。
|===
For one thing, the Session API is rich and supports operations that we don't
want or need in our domain. Our `UnitOfWork` simplifies the session to its
essential core: it can be started, committed, or thrown away.
+首先,`Session` 的 API 非常丰富,并且支持我们在领域中不需要或不想要的操作。
+而我们的 `UnitOfWork` 将会话简化为其核心本质:它可以被启动、提交或丢弃。
+
For another, we're using the `UnitOfWork` to access our `Repository` objects.
This is a neat bit of developer usability that we couldn't do with a plain
SQLAlchemy `Session`.
+另一方面,我们使用 `UnitOfWork` 来访问我们的 `Repository` 对象。这是一种简洁的开发者易用性设计,
+而这是单纯使用 SQLAlchemy 的 `Session` 无法实现的。
+
[role="nobreakinside less_space"]
-.Unit of Work Pattern Recap
+.Unit of Work Pattern Recap(工作单元模式回顾)
*****************************************************************
((("Unit of Work pattern", "recap of important points")))
-The Unit of Work pattern is an abstraction around data integrity::
+The Unit of Work pattern is an abstraction around data integrity(工作单元模式是围绕数据完整性的一种抽象)::
It helps to enforce the consistency of our domain model, and improves
performance, by letting us perform a single _flush_ operation at the
end of an operation.
+它通过允许我们在操作结束时执行一次 _刷新(flush)_ 操作,帮助我们强制维护领域模型的一致性,并提高性能。
-It works closely with the Repository and Service Layer patterns::
+It works closely with the Repository and Service Layer patterns(它与仓储模式和服务层模式紧密协作)::
The Unit of Work pattern completes our abstractions over data access by
representing atomic updates. Each of our service-layer use cases runs in a
single unit of work that succeeds or fails as a block.
+工作单元模式通过表示原子更新来完善我们对数据访问的抽象。我们的每个服务层用例都运行在一个单独的工作单元中,该工作单元要么整体成功,要么整体失败。
-This is a lovely case for a context manager::
+This is a lovely case for a context manager(这正是一个上下文管理器的绝佳应用场景)::
Context managers are an idiomatic way of defining scope in Python. We can use a
context manager to automatically roll back our work at the end of a request,
which means the system is safe by default.
+上下文管理器是在 Python 中定义作用域的一种惯用方式。我们可以使用上下文管理器在请求结束时自动回滚我们的工作,这意味着系统默认是安全的。
-SQLAlchemy already implements this pattern::
+SQLAlchemy already implements this pattern(SQLAlchemy 已经实现了这种模式)::
We introduce an even simpler abstraction over the SQLAlchemy `Session` object
in order to "narrow" the interface between the ORM and our code. This helps
to keep us loosely coupled.
+我们在 SQLAlchemy 的 `Session` 对象之上引入了一个更简单的抽象,以便“收窄” ORM 和我们的代码之间的接口。这有助于保持松耦合。
*****************************************************************
@@ -770,6 +936,9 @@ implementation at the outside edge of the system. This lines up nicely with
SQLAlchemy's own
https://oreil.ly/tS0E0[recommendations]:
+最后,我们再次受到依赖倒置原则的推动:我们的服务层依赖于一个精简的抽象,而具体的实现则附加在系统的外围。这与 SQLAlchemy 自身的
+https://oreil.ly/tS0E0[推荐] 非常契合:
+
[quote, SQLAlchemy "Session Basics" Documentation]
____
Keep the life cycle of the session (and usually the transaction) separate and
@@ -777,6 +946,9 @@ external. The most comprehensive approach, recommended for more substantial
applications, will try to keep the details of session, transaction, and
exception management as far as possible from the details of the program doing
its work.
+
+将会话(以及通常是事务)的生命周期分离并置于外部。对于更复杂的应用程序,推荐采用最全面的方法,
+该方法将尽量让会话、事务以及异常管理的细节远离实际程序逻辑的细节。
____
diff --git a/chapter_07_aggregate.asciidoc b/chapter_07_aggregate.asciidoc
index 593c920e..23e907da 100644
--- a/chapter_07_aggregate.asciidoc
+++ b/chapter_07_aggregate.asciidoc
@@ -1,5 +1,6 @@
[[chapter_07_aggregate]]
== Aggregates and Consistency Boundaries
+聚合与一致性边界
((("aggregates", "Product aggregate")))
((("consistency boundaries")))
@@ -12,23 +13,34 @@ discuss the concept of a _consistency boundary_ and show how making it
explicit can help us to build high-performance software without compromising
maintainability.
+在本章中,我们将重新审视我们的领域模型,讨论不变量和约束,并探讨领域对象是如何在概念上以及持久化存储中维护其自身的内部一致性的。
+我们会讨论 _一致性边界_ 的概念,并展示如何通过显式定义一致性边界来帮助我们构建高性能的软件,同时不牺牲可维护性。
+
<> shows a preview of where we're headed: we'll introduce
a new model object called `Product` to wrap multiple batches, and we'll make
the old `allocate()` domain service available as a method on `Product` instead.
+<> 展示了我们前进方向的预览:我们将引入一个名为 `Product` 的新模型对象,用来封装多个批次(batches),
+并且我们会将旧的 `allocate()` 领域服务改为在 `Product` 上作为一个方法提供。
+
[[maps_chapter_06]]
-.Adding the Product aggregate
+.Adding the Product aggregate(新增产品聚合)
image::images/apwp_0701.png[]
Why? Let's find out.
+为什么?让我们一探究竟。
+
[TIP]
====
The code for this chapter is in the chapter_07_aggregate branch
https://github.com/cosmicpython/code/tree/chapter_07_aggregate[on [.keep-together]#GitHub#]:
+本章的代码位于 chapter_07_aggregate 分支
+https://github.com/cosmicpython/code/tree/chapter_07_aggregate[在 [.keep-together]#GitHub#]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -40,35 +52,50 @@ git checkout chapter_06_uow
=== Why Not Just Run Everything in a Spreadsheet?
+为什么不直接在电子表格中运行所有内容?
((("domain model", "using spreadsheets instead of")))
((("spreadsheets, using instead of domain model")))
What's the point of a domain model, anyway? What's the fundamental problem
we're trying to address?
+那么,领域模型的意义究竟是什么?我们试图解决的核心问题是什么呢?
+
Couldn't we just run everything in a spreadsheet? Many of our users would be
[.keep-together]#delighted# by that. Business users _like_ spreadsheets because
they're simple, familiar, and yet enormously powerful.
+难道我们不能直接在电子表格中运行所有内容吗?许多用户会对此感到 [.keep-together]#非常高兴#。
+业务用户 _喜欢_ 电子表格,因为它们简单、熟悉,却又极其强大。
+
((("CSV over SMTP architecture")))
In fact, an enormous number of business processes do operate by manually sending
spreadsheets back and forth over email. This "CSV over SMTP" architecture has
low initial complexity but tends not to scale very well because it's difficult
to apply logic and maintain consistency.
+事实上,大量的业务流程确实是通过手动在电子邮件中传递电子表格来运作的。这种“通过 SMTP 传递 CSV”的架构初始复杂性很低,
+但往往难以很好地扩展,因为很难应用逻辑并维护一致性。
+
// IDEA: better examples?
Who is allowed to view this particular field? Who's allowed to update it? What
happens when we try to order –350 chairs, or 10,000,000 tables? Can an employee
have a negative salary?
+谁被允许查看这个特定字段?谁被允许更新它?当我们尝试订购 -350 把椅子或 10,000,000 张桌子时会发生什么?一个员工可以有负数的薪水吗?
+
These are the constraints of a system. Much of the domain logic we write exists
to enforce these constraints in order to maintain the invariants of the
system. The _invariants_ are the things that have to be true whenever we finish
an operation.
+这些是系统的约束条件。我们编写的大量领域逻辑是为了实施这些约束,以保持系统的不变量。
+_不变量_ 是指每当我们完成一次操作时,必须保持为真的那些事情。
+
=== Invariants, Constraints, and Consistency
+不变量、约束与一致性
((("invariants", "invariants, constraints, and consistency")))
((("domain model", "invariants, constraints, and consistency")))
@@ -76,11 +103,15 @@ The two words are somewhat interchangeable, but a _constraint_ is a
rule that restricts the possible states our model can get into, while an _invariant_
is defined a little more precisely as a condition that is always true.
+这两个词在某种程度上可以互换使用,但 _约束_ 是限制我们模型可能进入状态的规则,而 _不变量_ 更准确地被定义为始终为真的条件。
+
((("constraints")))
If we were writing a hotel-booking system, we might have the constraint that double
bookings are not allowed. This supports the invariant that a room cannot have more
than one booking for the same night.
+如果我们正在编写一个酒店预订系统,我们可能会有一个不允许重复预订的约束。这项约束支持了这样一个不变量:同一晚一间房间不能有多个预订。
+
((("consistency")))
Of course, sometimes we might need to temporarily _bend_ the rules. Perhaps we
need to shuffle the rooms around because of a VIP booking. While we're moving
@@ -89,11 +120,18 @@ should ensure that, when we're finished, we end up in a final consistent state,
where the invariants are met. If we can't find a way to accommodate all our guests,
we should raise an error and refuse to complete the operation.
+当然,有时我们可能需要暂时 _打破_ 规则。比如,因为 VIP 预订的原因,我们可能需要调整房间的分配。当我们在内存中移动预订时,
+可能会出现重复预订的情况,但我们的领域模型应该确保在操作完成时,最终会达到一个一致的状态,且所有不变量都得到满足。如果无法找到办法容纳所有的客人,我们应当抛出错误并拒绝完成操作。
+
Let's look at a couple of concrete examples from our business requirements; we'll start with this one:
+让我们来看几个源自业务需求的具体示例;我们从下面这个开始:
+
[quote, The business]
____
An order line can be allocated to only one batch at a time.
+
+一个订单项在同一时间只能分配给一个批次。
____
((("business rules", "invariants, constraints, and consistency")))
@@ -104,15 +142,23 @@ on two different batches for the same line, and currently, there's nothing
there to explicitly stop us from doing that.
+这是一个施加了不变量的业务规则。不变量是指一个订单项要么未分配到任何批次,要么只分配到一个批次,但绝不会超过一个批次。
+我们需要确保代码永远不会意外地对同一个订单项在两个不同的批次上调用 `Batch.allocate()`,而目前没有任何机制能够明确地阻止我们这么做。
+
==== Invariants, Concurrency, and Locks
+不变量、并发与锁
((("business rules", "invariants, concurrency, and locks")))
Let's look at another one of our business rules:
+让我们再来看另一个业务规则:
+
[quote, The business]
____
We can't allocate to a batch if the available quantity is less than the
quantity of the order line.
+
+如果批次的可用数量小于订单项的数量,我们就不能将其分配到该批次。
____
((("invariants", "invariants, concurrency, and locks")))
@@ -122,29 +168,43 @@ physical cushion, for example. Every time we update the state of the system, our
to ensure that we don't break the invariant, which is that the available
quantity must be greater than or equal to zero.
+这里的约束是,我们不能将超过批次可用库存的数量分配出去,以避免超卖库存,例如不会将同一个实际的靠垫分配给两个客户。每次更新系统状态时,
+我们的代码都需要确保不会破坏不变量,而不变量是:可用数量必须大于或等于零。
+
In a single-threaded, single-user application, it's relatively easy for us to
maintain this invariant. We can just allocate stock one line at a time, and
raise an error if there's no stock available.
+在单线程、单用户的应用程序中,维护这个不变量相对来说是比较容易的。我们只需一次分配一条订单项,如果没有足够的可用库存,就抛出一个错误即可。
+
((("concurrency")))
This gets much harder when we introduce the idea of _concurrency_. Suddenly we
might be allocating stock for multiple order lines simultaneously. We might
even be allocating order lines at the same time as processing changes to the
batches [.keep-together]#themselves#.
+当我们引入 _并发_ 的概念时,事情就变得困难得多了。突然之间,我们可能会同时为多个订单项分配库存。
+我们甚至可能在分配订单项的同时处理批次 [.keep-together]#本身# 的变更。
+
((("locks on database tables")))
We usually solve this problem by applying _locks_ to our database tables. This
prevents two operations from happening simultaneously on the same row or same
table.
+我们通常通过对数据库表应用 _锁_ 来解决这个问题。这可以防止两个操作在同一行或同一表上同时发生。
+
As we start to think about scaling up our app, we realize that our model
of allocating lines against all available batches may not scale. If we process
tens of thousands of orders per hour, and hundreds of thousands of
order lines, we can't hold a lock over the whole `batches` table for
every single one--we'll get deadlocks or performance problems at the very least.
+当我们开始考虑扩大应用程序的规模时,我们会意识到,将订单项分配到所有可用批次的这种模型可能无法扩展。
+如果我们每小时处理数万个订单和数十万个订单项,我们无法在每次操作时对整个 `batches` 表加锁——这样做至少会导致死锁或性能问题。
+
=== What Is an Aggregate?
+什么是聚合?
((("aggregates", "about")))
((("concurrency", "allowing for greatest degree of")))
@@ -155,20 +215,31 @@ system but allow for the greatest degree of concurrency. Maintaining our
invariants inevitably means preventing concurrent writes; if multiple users can
allocate `DEADLY-SPOON` at the same time, we run the risk of overallocating.
+OK,那么如果我们每次想分配一个订单项时都无法锁住整个数据库,那我们应该怎么做呢?我们希望保护系统的不变量,同时允许尽可能高的并发性。
+维护不变量不可避免地意味着要防止并发写操作;如果多个用户可以同时分配 `DEADLY-SPOON`,我们就面临着超额分配的风险。
+
On the other hand, there's no reason we can't allocate `DEADLY-SPOON` at the
same time as `FLIMSY-DESK`. It's safe to allocate two products at the
same time because there's no invariant that covers them both. We don't need them
to be consistent with each other.
+另一方面,我们完全可以在分配 `DEADLY-SPOON` 的同时分配 `FLIMSY-DESK`。同时分配两个产品是安全的,
+因为没有不变量将这两个产品关联在一起。我们不需要它们彼此之间保持一致性。
+
((("Aggregate pattern")))
((("domain driven design (DDD)", "Aggregate pattern")))
The _Aggregate_ pattern is a design pattern from the DDD community that helps us
to resolve this tension. An _aggregate_ is just a domain object that contains
other domain objects and lets us treat the whole collection as a single unit.
+_聚合(Aggregate)_模式是来自 DDD(领域驱动设计)社区的一种设计模式,可帮助我们解决这种矛盾。
+_聚合_ 只是一个包含其他领域对象的领域对象,并允许我们将整个集合视为一个单元来处理。
+
The only way to modify the objects inside the aggregate is to load the whole
thing, and to call methods on the aggregate itself.
+修改聚合内部对象的唯一方法是加载整个聚合,并调用聚合自身的方法。
+
((("collections")))
As a model gets more complex and grows more entity and value objects,
referencing each other in a tangled graph, it can be hard to keep track of who
@@ -178,6 +249,10 @@ the single entrypoint for modifying their related objects. It makes the system
conceptually simpler and easy to reason about if you nominate some objects to be
in charge of consistency for the others.
+随着模型变得越来越复杂并增加更多实体和值对象,这些对象之间可能会通过一个纠缠不清的图互相引用,这使得追踪谁可以修改什么变得困难。
+尤其是当模型中包含 _集合_(如我们的批次是一个集合)时,指定某些实体作为唯一的入口来修改与其相关的对象是一个好主意。
+如果指定某些对象负责其他对象的一致性,那么系统的概念会变得更加简单,也更容易推理。
+
For example, if we're building a shopping site, the Cart might make a good
aggregate: it's a collection of items that we can treat as a single unit.
Importantly, we want to load the entire basket as a single blob from our data
@@ -185,28 +260,42 @@ store. We don't want two requests to modify the basket at the same time, or we
run the risk of weird concurrency errors. Instead, we want each change to the
basket to run in a single database transaction.
+例如,如果我们在构建一个购物网站,那么购物车可能是一个很好的聚合:它是一个可以作为单一单元处理的商品集合。
+重要的是,我们希望将整个购物车作为一个整体从数据存储中加载。我们不希望两个请求同时修改购物车,否则可能会导致奇怪的并发错误。
+相反,我们希望对购物车的每一次修改都在一次单独的数据库事务中运行。
+
((("consistency boundaries")))
We don't want to modify multiple baskets in a transaction, because there's no
use case for changing the baskets of several customers at the same time. Each
basket is a single _consistency boundary_ responsible for maintaining its own
invariants.
+我们不希望在一个事务中修改多个购物车,因为没有同时更改多个客户购物车的用例。每个购物车是一个单独的 _一致性边界_,负责维护其自身的不变量。
+
[quote, Eric Evans, Domain-Driven Design blue book]
____
An AGGREGATE is a cluster of associated objects that we treat as a unit for the
purpose of data changes.
((("Evans, Eric")))
+
+聚合是一些相关对象的集合,我们将其视为一个单元以进行数据更改。
____
Per Evans, our aggregate has a root entity (the Cart) that encapsulates access
to items. Each item has its own identity, but other parts of the system will always
refer to the Cart only as an indivisible whole.
+根据 Evans 的定义,我们的聚合有一个根实体(购物车),它封装了对物品的访问。每个物品都有自己的标识,
+但系统的其他部分将始终将购物车视为一个不可分割的整体进行引用。
+
TIP: Just as we sometimes use pass:[_leading_underscores] to mark methods or functions
as "private," you can think of aggregates as being the "public" classes of our
model, and the rest of the entities and value objects as "private."
+就像我们有时使用 pass:[_前导下划线] 来标记方法或函数为“私有”一样,你可以将聚合视为我们模型中的“公共”类,
+而将其他实体和值对象视为“私有”。
=== Choosing an Aggregate
+选择一个聚合
((("performance", "impact of using aggregates")))
((("aggregates", "choosing an aggregrate", id="ix_aggch")))
@@ -217,38 +306,59 @@ software and prevent weird race issues. We want to draw a boundary around a
small number of objects—the smaller, the better, for performance—that have to
be consistent with one another, and we need to give this boundary a good name.
+在我们的系统中应该选择哪个聚合呢?这个选择在某种程度上是任意的,但却非常重要。聚合将成为我们确保每个操作以一致状态结束的边界。
+这有助于我们更好地理解软件并防止奇怪的竞争问题。我们希望围绕一小部分必须彼此保持一致的对象划定边界——对象越少越好,
+以提高性能——并且我们需要为这个边界起一个合适的名字。
+
((("batches", "collection of")))
The object we're manipulating under the covers is `Batch`. What do we call a
collection of batches? How should we divide all the batches in the system into
discrete islands of consistency?
+我们在底层操作的对象是 `Batch`。那我们该如何称呼一组批次呢?我们又该如何将系统中的所有批次划分为一些独立的一致性单元呢?
+
We _could_ use `Shipment` as our boundary. Each shipment contains several
batches, and they all travel to our warehouse at the same time. Or perhaps we
could use `Warehouse` as our boundary: each warehouse contains many batches,
and counting all the stock at the same time could make sense.
+我们 _可以_ 使用 `货运(Shipment)` 作为边界。每个货运包含多个批次,它们会同时运送到我们的仓库。
+或者,我们也可以使用 `仓库(Warehouse)` 作为边界:每个仓库包含许多批次,同时统计所有库存可能是合理的选择。
+
Neither of these concepts really satisfies us, though. We should be able to
allocate `DEADLY-SPOONs` or `FLIMSY-DESKs` in one go, even if they're not in the
same warehouse or the same shipment. These concepts have the wrong granularity.
+然而,这些概念都无法真正满足我们的需求。我们应该能够一次性分配 `DEADLY-SPOON` 或 `FLIMSY-DESK`,即使它们不在同一个仓库或同一个货运中。
+这些概念的粒度并不合适。
+
When we allocate an order line, we're interested only in batches
that have the same SKU as the order line. Some sort of concept like
`GlobalSkuStock` could work: a collection of all the batches for a given SKU.
+当我们分配一个订单项时,我们只关心与该订单项有相同 SKU 的批次。一种像 `全局SKU库存(GlobalSkuStock)` 的概念可能会
+奏效:即给定 SKU 的所有批次的集合。
+
It's an unwieldy name, though, so after some bikeshedding via `SkuStock`, `Stock`,
`ProductStock`, and so on, we decided to simply call it `Product`—after all,
that was the first concept we came across in our exploration of the
domain language back in <>.
+不过,这个名字略显笨拙,所以经过一番关于 `Sku库存(SkuStock)`、`库存(Stock)`、`产品库存(ProductStock)` 等名称的讨论后,
+我们最终决定简单地称它为 `产品(Product)`——毕竟, 这是我们在探索领域语言时最早接触到的概念之一,早在 <> 中就已经提到过了。
+
((("allocate service", "allocating against all batches with")))
((("batches", "allocating against all batches using domain service")))
So the plan is this: when we want to allocate an order line, instead of
<>, where we look up all the `Batch` objects in
the world and pass them to the `allocate()` domain service...
+所以计划是这样的:当我们想要分配一个订单项时,与其采用 <> 中的方式,
+即查找系统中所有的 `批次(Batch)` 对象并将它们传递给 `allocate()` 领域服务...
+
[role="width-60"]
[[before_aggregates_diagram]]
-.Before: allocate against all batches using the domain service
+.Before: allocate against all batches using the domain service(之前:使用领域服务在所有批次中进行分配)
image::images/apwp_0702.png[]
[role="image-source"]
----
@@ -300,9 +410,12 @@ allocate --> allocate_domain_service: allocate(orderline, batches)
of all the batches _for that SKU_, and we can call a `.allocate()` method on that
instead.
+...我们将进入 <> 所描述的世界,在这个世界中,每个订单项的特定 SKU 会对应一个新的 `Product` 对象,
+它负责该 SKU 的所有批次。然后,我们可以直接在这个对象上调用 `.allocate()` 方法。
+
[role="width-75"]
[[after_aggregates_diagram]]
-.After: ask Product to allocate against its batches
+.After: ask Product to allocate against its batches(之后:让产品在其批次中进行分配)
image::images/apwp_0703.png[]
[role="image-source"]
----
@@ -350,9 +463,11 @@ Product o- Batch: has
((("Product object", "code for")))
Let's see how that looks in code form:
+让我们看看这在代码中的样子:
+
[role="pagebreak-before"]
[[product_aggregate]]
-.Our chosen aggregate, Product (src/allocation/domain/model.py)
+.Our chosen aggregate, Product (src/allocation/domain/model.py)(我们选择的聚合——产品)
====
[source,python]
[role="non-head"]
@@ -373,12 +488,15 @@ class Product:
====
<1> ``Product``'s main identifier is the `sku`.
+`Product` 的主要标识符是 `sku`。
<2> Our `Product` class holds a reference to a collection of `batches` for that SKU.
((("allocate service", "moving to be a method on Product aggregate")))
+我们的 `Product` 类保存了对该 SKU 的 `batches` 集合的引用。
<3> Finally, we can move the `allocate()` domain service to
be a method on the [.keep-together]#`Product`# aggregate.
+最后,我们可以将 `allocate()` 领域服务转移为 [.keep-together]#`Product`# 聚合上的一个方法。
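Assembled from the callouts, the aggregate looks roughly like this (a sketch; `Batch`, `OrderLine`, and `OutOfStock` are as defined in earlier chapters):

[source,python]
----
from typing import List


class Product:
    def __init__(self, sku: str, batches: List[Batch]):
        self.sku = sku  # (1) the main identifier
        self.batches = batches  # (2) the batches for that SKU

    def allocate(self, line: OrderLine) -> str:  # (3) the former domain service
        try:
            batch = next(
                b for b in sorted(self.batches) if b.can_allocate(line)
            )
            batch.allocate(line)
            return batch.reference
        except StopIteration:
            raise OutOfStock(f"Out of stock for sku {line.sku}")
----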
// IDEA (hynek): random nitpick: exceptions denoting errors should be
// named *Error. Are you doing this to save space in the listing?
@@ -395,17 +513,21 @@ NOTE: This `Product` might not look like what you'd expect a `Product`
of a product in one app can be very different from another.
See the following sidebar for more discussion.
((("bounded contexts", "product concept and")))
+这个 `Product` 可能看起来不像你期望的那种 `Product` 模型。没有价格、没有描述、没有尺寸。而我们的分配服务并不关心这些东西。
+这正是限界上下文(bounded contexts)的力量;一个应用程序中的产品概念可以与另一个应用程序中的产品概念非常不同。请参阅以下侧栏获取更多讨论。
[role="nobreakinside less_space"]
[[bounded_contexts_sidebar]]
-.Aggregates, Bounded Contexts, and Microservices
+.Aggregates, Bounded Contexts, and Microservices(聚合、限界上下文和微服务)
*******************************************************************************
((("bounded contexts")))
One of the most important contributions from Evans and the DDD community
is the concept of
https://martinfowler.com/bliki/BoundedContext.html[_bounded contexts_].
+Evans 和 DDD 社区最重要的贡献之一是 https://martinfowler.com/bliki/BoundedContext.html[_限界上下文_] 的概念。
+
((("domain driven design (DDD)", "bounded contexts")))
In essence, this was a reaction against attempts to capture entire businesses
into a single model. The word _customer_ means different things to people
@@ -417,30 +539,46 @@ all the use cases, it's better to have several models, draw boundaries
around each context, and handle the translation between different contexts
explicitly.
+本质上,这是一种对试图将整个业务捕获到一个单一模型中的做法的反应。_客户_ 这个词对于销售、客户服务、物流、技术支持等人员来说有着不同的含义。
+在一个上下文中需要的属性在另一个上下文中可能毫无意义;更麻烦的是,同样的术语在不同的上下文中可能有完全不同的意义。
+与其试图构建一个单一模型(或类,或数据库)以满足所有用例,不如为不同的用例构建多个模型,为每个上下文划定边界,并显式地处理不同上下文之间的转换。
+
((("microservices", "bounded contexts and")))
This concept translates very well to the world of microservices, where each
microservice is free to have its own concept of "customer" and its own rules for
translating that to and from other microservices it integrates with.
+这个概念非常适合应用于微服务的世界。在微服务中,每个微服务都可以拥有它自己对“客户”的定义,以及其自身的规则来处理它与其他微服务之间的转换。
+
In our example, the allocation service has `Product(sku, batches)`,
whereas the ecommerce will have `Product(sku, description, price, image_url,
dimensions, etc...)`. As a rule of thumb, your domain models should
include only the data that they need for performing calculations.
+在我们的示例中,分配服务的模型是 `Product(sku, batches)`,
+而电商系统的模型可能是 `Product(sku, description, price, image_url, dimensions, etc...)`。
+通常来说,你的领域模型应仅包含它们执行计算所需的数据。
+
Whether or not you have a microservices architecture, a key consideration
in choosing your aggregates is also choosing the bounded context that they
will operate in. By restricting the context, you can keep your number of
aggregates low and their size manageable.
+无论你是否采用微服务架构,选择聚合时的一个关键考虑因素是选择它们将要运行的限界上下文。通过限制上下文,你可以减少聚合的数量,并使其规模易于管理。
+
((("aggregates", "choosing an aggregrate", startref="ix_aggch")))
Once again, we find ourselves forced to say that we can't give this issue
the treatment it deserves here, and we can only encourage you to read up on it
elsewhere. The Fowler link at the start of this sidebar is a good starting point, and either
(or indeed, any) DDD book will have a chapter or more on bounded contexts.
+再一次,我们不得不说,无法在这里对这一主题进行应有的深入讨论,我们只能鼓励你在其他地方深入阅读。
+此侧栏开头提供的 Fowler 链接是一个不错的起点,任何一本(或者确切地说,任何)DDD 书籍中都会有一章或更多章节专门讨论限界上下文。
+
*******************************************************************************
=== One Aggregate = One Repository
+一个聚合 = 一个仓储
((("aggregates", "one aggregrate = one repository")))
((("repositories", "one aggregrate = one repository")))
@@ -449,17 +587,23 @@ that they are the only entities that are publicly accessible to the outside
world. In other words, the only repositories we are allowed should be
repositories that return aggregates.
+一旦你将某些实体定义为聚合,我们就需要遵循一个规则:它们是唯一对外部世界公开访问的实体。
+换句话说,我们唯一允许的仓储应该是那些返回聚合的仓储。
+
NOTE: The rule that repositories should only return aggregates is the main place
where we enforce the convention that aggregates are the only way into our
domain model. Be wary of breaking it!
+仓储只应返回聚合的这一规则,是我们强制执行“聚合是进入领域模型的唯一途径”这一约定的主要所在。小心不要打破它!
((("Unit of Work pattern", "UoW and product repository")))
((("ProductRepository object")))
In our case, we'll switch from `BatchRepository` to `ProductRepository`:
+在我们的例子中,我们将从使用 `BatchRepository` 切换为使用 `ProductRepository`:
+
[[new_uow_and_repository]]
-.Our new UoW and repository (unit_of_work.py and repository.py)
+.Our new UoW and repository (unit_of_work.py and repository.py)(我们新的工作单元和仓储)
====
[source,python]
[role="skip"]
@@ -490,8 +634,11 @@ pattern means we don't have to worry about that yet. We can just use
our `FakeRepository` and then feed through the new model into our service
layer to see how it looks with `Product` as its main entrypoint:
+ORM 层需要进行一些调整,以便正确的批次能够自动加载并关联到 `Product` 对象上。值得庆幸的是,仓储模式让我们暂时无需担心这些问题。
+我们可以直接使用我们的 `FakeRepository`,然后将新模型传递到服务层,来看看以 `Product` 作为主要入口点时的表现:
+
[[service_layer_uses_products]]
-.Service layer (src/allocation/service_layer/services.py)
+.Service layer (src/allocation/service_layer/services.py)(服务层)
====
[source,python]
----
@@ -524,6 +671,7 @@ def allocate(
====
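The key change, in sketch form: the service now asks the repository for a `Product` and delegates to the aggregate's method:

[source,python]
----
def allocate(
    orderid: str, sku: str, qty: int,
    uow: unit_of_work.AbstractUnitOfWork,
) -> str:
    line = OrderLine(orderid, sku, qty)
    with uow:
        product = uow.products.get(sku=line.sku)  # Product is now the entrypoint
        if product is None:
            raise InvalidSku(f"Invalid sku {line.sku}")
        batchref = product.allocate(line)  # delegate to the aggregate
        uow.commit()
    return batchref
----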
=== What About Performance?
+那么性能如何呢?
((("performance", "impact of using aggregates")))
((("aggregates", "performance and")))
@@ -532,20 +680,31 @@ to have high-performance software, but here we are loading _all_ the batches whe
we only need one. You might expect that to be inefficient, but there are a few
reasons why we're comfortable here.
+我们已经多次提到,使用聚合建模是因为我们想要构建高性能的软件。但现在我们在只需要一个批次时却加载了 _所有_ 的批次。
+你可能会觉得这样做效率不高,但这里有几个理由让我们对此感到放心。
+
First, we're purposefully modeling our data so that we can make a single
query to the database to read, and a single update to persist our changes. This
tends to perform much better than systems that issue lots of ad hoc queries. In
systems that don't model this way, we often find that transactions slowly
get longer and more complex as the software evolves.
+首先,我们有意对数据进行建模,以便能够通过单一查询从数据库读取数据,并通过单次更新来持久化我们的更改。
+这种方式的性能通常远胜于那些发出大量临时查询的系统。在未按这种方式建模的系统中,我们经常发现事务随着软件的发展会变得越来越长、越来越复杂。
+
Second, our data structures are minimal and comprise a few strings and
integers per row. We can easily load tens or even hundreds of batches in a few
milliseconds.
+其次,我们的数据结构是极简的,每行仅包含少量字符串和整数。我们可以轻松地在几毫秒内加载数十甚至数百个批次。
+
Third, we expect to have only 20 or so batches of each product at a time.
Once a batch is used up, we can discount it from our calculations. This means
that the amount of data we're fetching shouldn't get out of control over time.
+第三,我们预计每种产品同时只有大约 20 个批次。一旦某个批次被用完,就可以将其从我们的计算中排除。
+这意味着我们获取的数据量不会随着时间的推移而失控。
+
If we _did_ expect to have thousands of active batches for a product, we'd have
a couple of options. For one, we could use lazy-loading for the batches in a
product. From the perspective of our code, nothing would change, but in the
@@ -553,30 +712,45 @@ background, SQLAlchemy would page through data for us. This would lead to more
requests, each fetching a smaller number of rows. Because we need to find only a
single batch with enough capacity for our order, this might work pretty well.
+如果我们 _确实_ 预计某个产品会有数千个活动批次,我们会有几个选项可供选择。例如,我们可以对产品中的批次使用延迟加载(lazy-loading)。
+从我们代码的角度来看,这不会引起任何变化,但在后台,SQLAlchemy 会为我们分页加载数据。这将导致多次请求,每次请求获取较少的行数。
+因为我们只需要找到一个能够满足订单容量的批次,这种方法可能会非常有效。
+
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
******************************************************************************
((("aggregates", "exercise for the reader")))
You've just seen the main top layers of the code, so this shouldn't be too hard,
but we'd like you to implement the `Product` aggregate starting from `Batch`,
just as we did.
+你刚刚看到了代码的主要顶层结构,所以这应该不会太难。我们希望你从 `Batch` 开始实现 `Product` 聚合,就像我们所做的那样。
+
Of course, you could cheat and copy/paste from the previous listings, but even
if you do that, you'll still have to solve a few challenges on your own,
like adding the model to the ORM and making sure all the moving parts can
talk to each other, which we hope will be instructive.
+当然,你可以通过复制/粘贴之前的代码清单来“作弊”,但即使这样,你仍然需要自行解决一些挑战,
+比如将模型添加到 ORM 中,并确保所有组件能够相互通信。我们希望这些步骤对你有所启发。
+
You'll find the code https://github.com/cosmicpython/code/tree/chapter_07_aggregate_exercise[on GitHub].
We've put in a "cheating" implementation in the delegates to the existing
`allocate()` function, so you should be able to evolve that toward the real
thing.
+你可以在 https://github.com/cosmicpython/code/tree/chapter_07_aggregate_exercise[GitHub 上] 找到代码。
+我们在委托中放入了一个“作弊”的实现,委托给了现有的 `allocate()` 函数,所以你应该能够将其逐步完善为真正的实现。
+
((("pytest", "@pytest.skip")))
We've marked a couple of tests with `@pytest.skip()`. After you've read the
rest of this chapter, come back to these tests to have a go at implementing
version numbers. Bonus points if you can get SQLAlchemy to do them for you by
magic!
+我们使用 `@pytest.skip()` 标记了几个测试。在你阅读完本章的剩余部分后,可以回过头来尝试实现版本号。
+如果你能让 SQLAlchemy 魔法般地为你完成这些工作,那就额外加分!
+
******************************************************************************
If all else failed, we'd just look for a different aggregate. Maybe we could
@@ -586,8 +760,13 @@ to help manage some technical constraints around consistency and performance.
There isn't _one_ correct aggregate, and we should feel comfortable changing our
minds if we find our boundaries are causing performance woes.
+如果其他方法都失败了,我们可以尝试寻找一个不同的聚合方式。也许我们可以按照区域或仓储来划分批次,或者围绕发货的概念重新设计我们的数据访问策略。
+聚合模式的目的是帮助应对一致性和性能相关的一些技术约束。并不存在 _唯一_ 正确的聚合方式,如果我们发现定义的边界导致性能问题,
+我们应该随时调整思路,不拘泥于现有方案。
+
=== Optimistic Concurrency with Version Numbers
+使用版本号的乐观并发控制
((("concurrency", "optimistic concurrency with version numbers", id="ix_concopt")))
((("optimistic concurrency with version numbers", id="ix_opticonc")))
@@ -596,17 +775,23 @@ We have our new aggregate, so we've solved the conceptual problem of choosing
an object to be in charge of consistency boundaries. Let's now spend a little
time talking about how to enforce data integrity at the database level.
+我们已经有了新的聚合,因此解决了选择负责一致性边界对象的概念性问题。现在,让我们花点时间讨论如何在数据库层面强制执行数据完整性。
+
NOTE: This section has a lot of implementation details; for example, some of it
is Postgres-specific. But more generally, we're showing one way of managing
concurrency issues, but it is just one approach. Real requirements in this
area vary a lot from project to project. You shouldn't expect to be able to
copy and paste code from here into production.
((("PostgreSQL", "managing concurrency issues")))
+本节包含许多实现细节,例如,其中一些是特定于 Postgres 的。但更普遍来说,我们展示了一种管理并发问题的方法,不过这仅仅是一种方法。
+实际需求在这一领域因项目而异。因此,你不应该期望能够将这里的代码直接复制粘贴到生产环境中使用。
((("locks on database tables", "optimistic locking")))
We don't want to hold a lock over the entire `batches` table, but how will we
implement holding a lock over just the rows for a particular SKU?
+我们不希望对整个 `batches` 表持有锁,但我们将如何实现仅对特定 SKU 的行持有锁呢?
+
((("version numbers", "in the products table, implementing optimistic locking")))
One answer is to have a single attribute on the `Product` model that acts as a marker for
the whole state change being complete and to use it as the single resource
@@ -616,6 +801,10 @@ the `allocations` tables, we force both to also try to update the
`version_number` in the `products` table, in such a way that only one of them
can win and the world stays consistent.
+一个解决方法是在 `Product` 模型上设置一个单一属性,用作整个状态变更完成的标记,并将其作为并发工作者争用的唯一资源。
+如果两个事务同时读取了 `batches` 的状态,并且都试图更新 `allocations` 表,
+我们可以强制它们同时尝试更新 `products` 表中的 `version_number`,以确保只有其中一个能成功,保持系统的一致性。
+
((("transactions", "concurrent, attempting update on Product")))
((("Product object", "two transactions attempting concurrent update on")))
<> illustrates two concurrent
@@ -625,14 +814,20 @@ in order to modify a state. But we set up our database integrity
rules such that only one of them is allowed to `commit` the new `Product`
with `version=4`, and the other update is rejected.
+<> 图解说明了两个并发事务同时进行读取操作,因此它们会看到一个 `Product`,例如,`version=3`。
+它们都会调用 `Product.allocate()` 来修改状态。但我们设置了数据库完整性规则,
+以确保只有其中一个事务被允许 `commit` 带有 `version=4` 的新 `Product`,而另一个更新会被拒绝。
+
TIP: Version numbers are just one way to implement optimistic locking. You
could achieve the same thing by setting the Postgres transaction isolation
level to `SERIALIZABLE`, but that often comes at a severe performance cost.
Version numbers also make implicit concepts explicit.
((("PostgreSQL", "SERIALIZABLE transaction isolation level")))
+版本号只是实现乐观锁的一种方式。你也可以通过将 Postgres 的事务隔离级别设置为 `SERIALIZABLE` 来实现相同的效果,
+但这样往往会带来严重的性能开销。而版本号则能将隐含的概念显式化。
[[version_numbers_sequence_diagram]]
-.Sequence diagram: two transactions attempting a concurrent update on [.keep-together]#`Product`#
+.Sequence diagram: two transactions attempting a concurrent update on [.keep-together]#`Product`#(时序图:两个事务尝试并发更新产品)
image::images/apwp_0704.png[]
[role="image-source"]
----
@@ -664,7 +859,7 @@ Database -[#red]>x Transaction2: Error! version is already 4
[role="nobreakinside less_space"]
-.Optimistic Concurrency Control and Retries
+.Optimistic Concurrency Control and Retries(乐观并发控制和重试)
********************************************************************************
What we've implemented here is called _optimistic_ concurrency control because
@@ -673,6 +868,9 @@ make changes to the database. We think it's unlikely that they will conflict
with each other, so we let them go ahead and just make sure we have a way to
notice if there is a [.keep-together]#problem#.
+我们在这里实现的被称为 _乐观_ 并发控制,因为我们的默认假设是,当两个用户想要对数据库进行修改时,一切都会正常进行。
+我们认为他们发生冲突的可能性很低,因此我们允许他们继续操作,只需确保我们有办法注意到是否存在 [.keep-together]#问题#。
+
((("pessimistic concurrency")))
((("locks on database tables", "pessimistic locking")))
((("SELECT FOR UPDATE statement")))
@@ -683,12 +881,19 @@ the whole `batches` table, or using ++SELECT FOR UPDATE++—we're pretending
that we've ruled those out for performance reasons, but in real life you'd
want to do some evaluations and measurements of your own.
+_悲观_ 并发控制基于以下假设:两个用户会引发冲突,因此我们希望在所有情况下都防止冲突发生,于是锁定所有内容以确保安全。
+在我们的示例中,这将意味着锁定整个 `batches` 表,或者使用 ++SELECT FOR UPDATE++。我们假设由于性能原因已经排除了这些选项,
+但在实际情况下,你可能需要进行一些评估和测量来决定最佳方案。
+
((("locks on database tables", "optimistic locking")))
With pessimistic locking, you don't need to think about handling failures
because the database will prevent them for you (although you do need to think
about deadlocks). With optimistic locking, you need to explicitly handle
the possibility of failures in the (hopefully unlikely) case of a clash.
+使用悲观锁定时,你无需考虑处理失败的情况,因为数据库会为你防止这些失败(不过你需要考虑死锁问题)。而使用乐观锁定时,
+你需要显式地处理在(希望是低概率的)冲突情况下可能出现的失败情况。
+
((("retries", "optimistic concurrency control and")))
The usual way to handle a failure is to retry the failed operation from the
beginning. Imagine we have two customers, Harry and Bob, and each submits an order
@@ -699,28 +904,44 @@ version 2 and tries to allocate again. If there is enough stock left, all is
well; otherwise, he'll receive `OutOfStock`. Most operations can be retried this
way in the case of a concurrency problem.
+处理失败的常见方式是从头开始重试失败的操作。想象一下,有两位客户,Harry 和 Bob,他们各自提交了一个 `SHINY-TABLE` 的订单。
+两个线程都加载了版本为 1 的产品并分配了库存。数据库阻止了并发更新,结果 Bob 的订单因为错误而失败。当我们 _重试_ 操作时,
+Bob 的订单会加载版本为 2 的产品并再次尝试分配。如果还有足够的库存,一切就会正常完成;否则,他将收到 `OutOfStock` 的通知。
+在大多数情况下,如果出现并发问题,操作都可以通过这种方式进行重试。
+
Read more on retries in <> and <>.
+
+关于重试的更多内容,请参阅 <> 和 <>。
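A minimal retry loop might look like this (purely a sketch: `ConcurrencyError` stands in for whatever your database driver raises on a version clash, and `uow_factory` builds a fresh UoW so that each attempt reloads current state):

[source,python]
----
def allocate_with_retry(orderid, sku, qty, uow_factory, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            # a fresh UoW reloads the Product at its latest version
            return allocate(orderid, sku, qty, uow_factory())
        except ConcurrencyError:  # hypothetical exception type for a version clash
            if attempt == max_attempts - 1:
                raise
----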
********************************************************************************
==== Implementation Options for Version Numbers
+实现版本号的选项
+
((("Product object", "version numbers implemented on")))
((("version numbers", "implementation options for")))
There are essentially three options for implementing version numbers:
+实现版本号本质上有三种选项:
+
1. `version_number` lives in the domain; we add it to the `Product` constructor,
and `Product.allocate()` is responsible for incrementing it.
+`version_number` 存在于领域中;我们将其添加到 `Product` 构造函数中,并由 `Product.allocate()` 负责对其进行递增。
2. The service layer could do it! The version number isn't _strictly_ a domain
concern, so instead our service layer could assume that the current version number
is attached to `Product` by the repository, and the service layer will increment it
before it does the `commit()`.
+服务层也可以负责!版本号并不是 _严格_ 的领域关注点,因此我们的服务层可以假设当前版本号是由仓储附加到 `Product` 上的,
+而服务层会在执行 `commit()` 之前递增它。
3. Since it's arguably an infrastructure concern, the UoW and repository
could do it by magic. The repository has access to version numbers for any
products it retrieves, and when the UoW does a commit, it can increment the
version number for any products it knows about, assuming them to have changed.
+由于可以说版本号是一个基础设施层的关注点,工作单元和仓储可以通过“魔法”来实现它。仓储能够访问它检索到的任何产品的版本号,
+而当工作单元执行 `commit` 时,它可以对它已知的任何产品的版本号进行递增,假设这些产品已经发生了更改。
Option 3 isn't ideal, because there's no real way of doing it without having to
assume that _all_ products have changed, so we'll be incrementing version numbers
@@ -728,14 +949,21 @@ when we don't have to.footnote:[Perhaps we could get some ORM/SQLAlchemy magic t
us when an object is dirty, but how would that work in the generic case—for example, for a
`CsvRepository`?]
+选项 3 并不理想,因为要实现它,就只能假设 _所有_ 的产品都已被更改,这样我们就会在不必要的时候也递增版本号。
+脚注:[或许我们可以借助一些 ORM/SQLAlchemy 的魔法来告诉我们对象何时被修改,但在通用情况下这又该如何工作呢——例如对于一个 `CsvRepository`?]
+
Option 2 involves mixing the responsibility for mutating state between the service
layer and the domain layer, so it's a little messy as well.
+选项 2 将状态变更的职责混合到了服务层和领域层之间,因此也有点混乱。
+
So in the end, even though version numbers don't _have_ to be a domain concern,
you might decide the cleanest trade-off is to put them in the domain:
+因此,最终,即使版本号不 _一定_ 是领域的关注点,你可能会决定最干净的权衡是将它们放入领域中:
+
[[product_aggregate_with_version_number]]
-.Our chosen aggregate, Product (src/allocation/domain/model.py)
+.Our chosen aggregate, Product (src/allocation/domain/model.py)(我们选择的聚合:产品)
====
[source,python]
----
@@ -757,6 +985,7 @@ class Product:
====
<1> There it is!
+就是这样!
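In sketch form, the earlier `Product` gains the attribute and the increment (everything else is unchanged):

[source,python]
----
class Product:
    def __init__(
        self, sku: str, batches: List[Batch],
        version_number: int = 0,  # (1) there it is!
    ):
        self.sku = sku
        self.batches = batches
        self.version_number = version_number

    def allocate(self, line: OrderLine) -> str:
        try:
            batch = next(
                b for b in sorted(self.batches) if b.can_allocate(line)
            )
            batch.allocate(line)
            self.version_number += 1  # bumped on every successful state change
            return batch.reference
        except StopIteration:
            raise OutOfStock(f"Out of stock for sku {line.sku}")
----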
TIP: If you're scratching your head at this version number business, it might
help to remember that the _number_ isn't important. What's important is
@@ -767,9 +996,12 @@ TIP: If you're scratching your head at this version number business, it might
((("concurrency", "optimistic concurrency with version numbers", startref="ix_concopt")))
((("optimistic concurrency with version numbers", startref="ix_opticonc")))
((("aggregates", "optimistic concurrency with version numbers", startref="ix_aggopticon")))
+如果你对这个版本号的概念感到困惑,记住这一点可能会有所帮助:_版本号本身并不重要_。重要的是,每当我们对 `Product` 聚合进行修改时,
+`Product` 数据库行都会被更新。版本号是一种简单且易于理解的方式,用来表示每次写操作都会发生变化的事物,但它同样也可以是每次生成的随机 UUID。
=== Testing for Our Data Integrity Rules
+测试我们的数据完整性规则
((("data integrity", "testing for", id="ix_daint")))
((("aggregates", "testing for data integrity rules", id="ix_aggtstdi")))
@@ -778,6 +1010,8 @@ Now to make sure we can get the behavior we want: if we have two
concurrent attempts to do allocation against the same `Product`, one of them
should fail, because they can't both update the version number.
+现在要确保我们能够获得所需的行为:如果有两个并发操作试图对同一个 `Product` 进行分配,其中一个操作应该失败,因为它们无法同时更新版本号。
+
((("time.sleep function")))
((("time.sleep function", "reproducing concurrency behavior with")))
((("concurrency", "reproducing behavior with time.sleep function")))
@@ -788,8 +1022,11 @@ in our use case, but it's not the most reliable or efficient way to reproduce
concurrency bugs. Consider using semaphores or similar synchronization primitives
shared between your threads to get better guarantees of behavior.]
+首先,让我们通过一个函数来模拟一个“慢”事务,该函数会先进行分配操作,然后显式地调用 sleep:脚注:[在我们的用例中,`time.sleep()` 很有效,
+但它并不是重现并发错误最可靠或最高效的方法。可以考虑使用信号量(semaphores)或类似的线程间同步原语,以更好地保证行为的一致性。]
+
[[time_sleep_thread]]
-.time.sleep can reproduce concurrency behavior (tests/integration/test_uow.py)
+.time.sleep can reproduce concurrency behavior (tests/integration/test_uow.py)(time.sleep 可以重现并发行为)
====
[source,python]
----
@@ -813,8 +1050,10 @@ def try_to_allocate(orderid, sku, exceptions):
Then we have our test invoke this slow allocation twice, concurrently, using
threads:
+然后,我们的测试会使用线程同时调用这个慢速分配函数两次:
+
[[data_integrity_test]]
-.An integration test for concurrency behavior (tests/integration/test_uow.py)
+.An integration test for concurrency behavior (tests/integration/test_uow.py)(一个用于测试并发行为的集成测试)
====
[source,python]
----
@@ -858,25 +1097,32 @@ def test_concurrent_updates_to_version_are_not_allowed(postgres_session_factory)
<1> We start two threads that will reliably produce the concurrency behavior we
want: `read1, read2, write1, write2`.
+我们启动两个线程,这将可靠地重现我们想要的并发行为:`read1, read2, write1, write2`。
<2> We assert that the version number has been incremented only once.
+我们断言版本号只增加了一次。
<3> We can also check on the specific exception if we like.
+如果需要,我们还可以检验具体的异常情况。
<4> And we double-check that only one allocation has gotten through.
+我们进一步确认只有一个分配操作成功了。
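Condensed to match the callouts, the test looks roughly like this (a sketch; `random_sku`, `random_batchref`, `random_orderid`, `insert_batch`, and `try_to_allocate` are the test helpers referenced above, and the SQL uses the chapter's SQLAlchemy 1.x string style):

[source,python]
----
import threading


def test_concurrent_updates_to_version_are_not_allowed(postgres_session_factory):
    sku, batch = random_sku(), random_batchref()
    session = postgres_session_factory()
    insert_batch(session, batch, sku, 100, eta=None, product_version=1)
    session.commit()

    order1, order2 = random_orderid(1), random_orderid(2)
    exceptions = []  # collected by try_to_allocate
    thread1 = threading.Thread(target=lambda: try_to_allocate(order1, sku, exceptions))
    thread2 = threading.Thread(target=lambda: try_to_allocate(order2, sku, exceptions))
    thread1.start()  # (1) reliably produces read1, read2, write1, write2
    thread2.start()
    thread1.join()
    thread2.join()

    [[version]] = session.execute(
        "SELECT version_number FROM products WHERE sku=:sku",
        dict(sku=sku),
    )
    assert version == 2  # (2) the version number was incremented only once
    assert len(exceptions) == 1  # (3) exactly one thread hit a concurrency error
    # (4) a further query against the allocations table can confirm that
    #     only one allocation got through
----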
// TODO: use """ syntax for sql literal above?
==== Enforcing Concurrency Rules by Using Database Transaction [.keep-together]#Isolation Levels#
+通过使用数据库事务隔离级别来强制执行并发规则
((("transactions", "using to enforce concurrency rules")))
((("concurrency", "enforcing rules using database transactions")))
To get the test to pass as it is, we can set the transaction isolation level
on our session:
+为了让测试按预期通过,我们可以在会话上设置事务隔离级别:
+
[[isolation_repeatable_read]]
-.Set isolation level for session (src/allocation/service_layer/unit_of_work.py)
+.Set isolation level for session (src/allocation/service_layer/unit_of_work.py)(为会话设置隔离级别)
====
[source,python]
----
@@ -897,8 +1143,12 @@ TIP: Transaction isolation levels are tricky stuff, so it's worth spending time
[.keep-together]#example#.]
((("PostgreSQL", "documentation for transaction isolation levels")))
((("isolation levels (transaction)")))
+事务隔离级别是比较复杂的内容,因此值得花些时间阅读和理解 https://oreil.ly/5vxJA[Postgres 文档]。脚注:[如果你没有使用 Postgres,
+则需要阅读其他数据库的文档。令人遗憾的是,不同的数据库对事务隔离级别的定义往往差异很大。
+例如,Oracle 的 `SERIALIZABLE` 就等同于 Postgres 的 [.keep-together]#`REPEATABLE READ`#。]
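+
+For reference, the session-factory change shown above boils down to a single
+keyword argument on the engine; roughly this (a sketch, assuming the project's
+`config` helper):
+
+[[isolation_level_sketch]]
+.Passing an isolation level to the engine (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
+
+from allocation import config
+
+DEFAULT_SESSION_FACTORY = sessionmaker(
+    bind=create_engine(
+        config.get_postgres_uri(),
+        isolation_level="REPEATABLE READ",  # the Postgres level we rely on
+    )
+)
+----
+====
+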
==== Pessimistic Concurrency Control Example: SELECT FOR UPDATE
+悲观并发控制示例:SELECT FOR UPDATE
((("pessimistic concurrency", "example, SELECT FOR UPDATE")))
((("concurrency", "pessimistic concurrency example, SELECT FOR UPDATE")))
@@ -907,6 +1157,8 @@ There are multiple ways to approach this, but we'll show one. https://oreil.ly/i
produces different behavior; two concurrent transactions will not be allowed to
do a read on the same rows at the same time:
+有多种方法可以实现这一点,但我们将展示其中一种方法。 https://oreil.ly/i8wKL[`SELECT FOR UPDATE`] 会产生不同的行为:两个并发事务将不能同时读取相同的行:
+
((("SQLAlchemy", "using DSL to specify FOR UPDATE")))
`SELECT FOR UPDATE` is a way of picking a row or rows to use as a lock
(although those rows don't have to be the ones you update). If two
@@ -914,11 +1166,16 @@ transactions both try to `SELECT FOR UPDATE` a row at the same time, one will
win, and the other will wait until the lock is released. So this is an example
of pessimistic concurrency control.
+`SELECT FOR UPDATE` 是一种选择一行或多行用作锁的方法(尽管这些行不一定是你要更新的行)。
+如果两个事务同时尝试对同一行执行 `SELECT FOR UPDATE`,其中一个会成功,而另一个则会等待直到锁被释放。因此,这就是一个悲观并发控制的示例。
+
Here's how you can use the SQLAlchemy DSL to specify `FOR UPDATE` at
query time:
+以下是如何使用 SQLAlchemy 的 DSL 在查询时指定 `FOR UPDATE`:
+
[[with_for_update]]
-.SQLAlchemy with_for_update (src/allocation/adapters/repository.py)
+.SQLAlchemy with_for_update (src/allocation/adapters/repository.py)(SQLAlchemy 的 with_for_update)
====
[source,python]
[role="non-head"]
@@ -936,6 +1193,8 @@ query time:
This will have the effect of changing the concurrency pattern from
+这样做的效果是,将并发模式从:
+
[role="skip"]
----
read1, read2, write1, write2(fail)
@@ -953,6 +1212,9 @@ read1, write1, read2, write2(succeed)
Some people refer to this as the "read-modify-write" failure mode.
Read https://oreil.ly/uXeZI["PostgreSQL Anti-Patterns: Read-Modify-Write Cycles"] for a good [.keep-together]#overview#.
+有些人将这种模式称为“读-修改-写”失败模式。阅读 https://oreil.ly/uXeZI["PostgreSQL Anti-Patterns: Read-Modify-Write Cycles"]
+以获得一个很好的 [.keep-together]#概述#。
+
//TODO maybe better diagrams here?
((("data integrity", "testing for", startref="ix_daint")))
@@ -963,9 +1225,14 @@ But if you have a test like the one we've shown, you can specify the behavior
you want and see how it changes. You can also use the test as a basis for
performing some performance experiments.((("aggregates", "testing for data integrity rules", startref="ix_aggtstdi")))
+我们没有足够的时间来详细讨论 `REPEATABLE READ` 和 `SELECT FOR UPDATE` 之间的所有权衡,或者一般情况下乐观锁与悲观锁的对比。
+但如果你有一个像我们展示的那样的测试,你可以指定你想要的行为并观察其变化。你还可以将该测试作为进行一些性能实验的基础。
+((("聚合", "测试数据完整性规则", startref="ix_aggtstdi")))
+
=== Wrap-Up
+总结
((("aggregates", "and consistency boundaries recap")))
Specific choices around concurrency control vary a lot based on business
@@ -975,6 +1242,9 @@ object as being the main entrypoint to some subset of our model, and as being in
charge of enforcing the invariants and business rules that apply across all of
those objects.
+关于并发控制的具体选择因业务环境和存储技术的不同而存在很大差异,但我们希望将本章的重点回归到聚合的概念性思想上:
+我们显式地将某个对象建模为模型中某一子集的主要入口点,并让它负责强制执行适用于所有这些对象的不变量和业务规则。
+
((("Effective Aggregate Design (Vernon)")))
((("Vernon, Vaughn")))
((("domain driven design (DDD)", "choosing the right aggregate, references on")))
@@ -984,75 +1254,94 @@ We also recommend these three online papers on
https://dddcommunity.org/library/vernon_2011[effective aggregate design]
by Vaughn Vernon (the "red book" author).
+选择合适的聚合是关键,这一决策可能会随着时间的推移而不断重新评估。有关更多内容,你可以查阅多本 DDD(领域驱动设计)相关的书籍。
+我们还推荐阅读 Vaughn Vernon(“红皮书”作者)撰写的关于 https://dddcommunity.org/library/vernon_2011[有效的聚合设计] 的三篇在线论文。
+
((("aggregates", "pros and cons or trade-offs")))
<> has some thoughts on the trade-offs of implementing the Aggregate pattern.
+<> 提供了一些关于实现聚合模式时权衡取舍的思考。
+
[[chapter_07_aggregate_tradoffs]]
[options="header"]
-.Aggregates: the trade-offs
+.Aggregates: the trade-offs(聚合:权衡取舍)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* Python might not have "official" public and private methods, but we do have
the underscores convention, because it's often useful to try to indicate what's for
"internal" use and what's for "outside code" to use. Choosing aggregates is
just the next level up: it lets you decide which of your domain model classes
are the public ones, and which aren't.
+_Python_ 可能没有“官方的”公共和私有方法,但我们有下划线的约定,因为尝试指示哪些是供“内部”使用的,哪些是供“外部代码”使用的,
+通常是很有用的。选择聚合就是更高一级的设计:它让你可以决定你的领域模型类中哪些是公共的,哪些不是。
* Modeling our operations around explicit consistency boundaries helps us avoid
performance problems with our ORM.
((("performance", "consistency boundaries and")))
+围绕显式的一致性边界来建模操作,可以帮助我们避免 ORM 的性能问题。
* Putting the aggregate in sole charge of state changes to its subsidiary models
makes the system easier to reason about, and makes it easier to control invariants.
+让聚合全权负责其子模型的状态变更,可以让系统更容易理解,同时也更容易控制不变量。
a|
* Yet another new concept for new developers to take on. Explaining entities versus
value objects was already a mental load; now there's a third type of domain
model object?
+对于新开发者来说,这又是一个需要掌握的新概念。解释实体与值对象之间的区别已经是一种心智负担了,现在居然又多了一种领域模型对象类型?
* Sticking rigidly to the rule that we modify only one aggregate at a time is a
big mental shift.
+严格遵守一次只修改一个聚合的规则是一个很大的思维转变。
* Dealing with eventual consistency between aggregates can be complex.
+处理聚合之间的最终一致性可能会非常复杂。
|===
[role="nobreakinside less_space"]
-.Aggregates and Consistency Boundaries Recap
+.Aggregates and Consistency Boundaries Recap(聚合和一致性边界回顾)
*****************************************************************
((("consistency boundaries", "recap")))
-Aggregates are your entrypoints into the domain model::
+Aggregates are your entrypoints into the domain model(聚合是你进入领域模型的入口点)::
By restricting the number of ways that things can be changed,
we make the system easier to reason about.
+通过限制可以更改事物的方式数量,我们使系统更容易理解。
-Aggregates are in charge of a consistency boundary::
+Aggregates are in charge of a consistency boundary(聚合负责一致性边界)::
An aggregate's job is to be able to manage our business rules
about invariants as they apply to a group of related objects.
It's the aggregate's job to check that the objects within its
remit are consistent with each other and with our rules, and
to reject changes that would break the rules.
+聚合的职责是管理应用于一组相关对象的、关于不变量的业务规则。聚合的任务是检查其管辖范围内的对象彼此之间、以及与我们的规则之间是否一致,
+并拒绝那些会破坏规则的更改。
-Aggregates and concurrency issues go together::
+Aggregates and concurrency issues go together(聚合与并发问题密切相关)::
When thinking about implementing these consistency checks, we
end up thinking about transactions and locks. Choosing the
right aggregate is about performance as well as conceptual
organization of your domain.
((("concurrency", "aggregates and concurrency issues")))
+在考虑实现这些一致性检查时,我们最终会涉及事务和锁的思考。选择合适的聚合不仅关系到性能,还涉及领域概念的组织。
*****************************************************************
[role="pagebreak-before less_space"]
=== Part I Recap
+第一部分回顾
((("component diagram at end of Part One")))
Do you remember <>, the diagram we showed at the
beginning of <> to preview where we were heading?
+你还记得 <> 吗?这是我们在 <> 开头展示的一个图,用来预览我们的学习方向。
+
[role="width-75"]
[[recap_components_diagram]]
-.A component diagram for our app at the end of Part I
+.A component diagram for our app at the end of Part I(第一部分结束时我们应用程序的组件图)
image::images/apwp_0705.png[]
So that's where we are at the end of Part I. What have we achieved? We've
@@ -1064,11 +1353,18 @@ have confidence that our tests will help us to prove the new functionality, and
when new developers join the project, they can read our tests to understand how
things work.
+这就是我们在第一部分结束时所处的位置。我们取得了哪些成就呢?我们已经了解了如何构建由一组高层次单元测试驱动的领域模型。
+我们的测试是活的文档:它们以清晰可读的代码描述了我们系统的行为——那些我们与业务相关方达成一致的规则。当业务需求发生变化时,
+我们有信心相信测试将帮助我们验证新的功能;而当新开发者加入项目时,他们可以阅读我们的测试以了解系统是如何工作的。
+
We've decoupled the infrastructural parts of our system, like the database and
API handlers, so that we can plug them into the outside of our application.
This helps us to keep our codebase well organized and stops us from building a
big ball of mud.
+我们已经将系统的基础设施部分(如数据库和 API 处理程序)解耦,使其能够作为外部组件连接到我们的应用程序。这有助于保持代码库的良好组织,
+防止我们构建出一团混乱的代码结构。
+
((("adapters", "ports-and-adapters inspired patterns")))
((("ports", "ports-and-adapters inspired patterns")))
By applying the dependency inversion principle, and by using
@@ -1077,14 +1373,23 @@ made it possible to do TDD in both high gear and low gear and to maintain a
healthy test pyramid. We can test our system edge to edge, and the need for
integration and end-to-end tests is kept to a minimum.
+通过应用依赖反转原则,并使用类似于端口和适配器(Ports-and-Adapters)模式的设计,如仓储(Repository)和工作单元(Unit of Work),
+我们实现了在高速挡和低速挡下进行测试驱动开发(TDD)的可能性,并保持了一个健康的测试金字塔。我们可以对系统进行从一端到另一端的测试,
+同时将对集成测试和端到端测试的需求降至最低。
+
Lastly, we've talked about the idea of consistency boundaries. We don't want to
lock our entire system whenever we make a change, so we have to choose which
parts are consistent with one another.
+最后,我们讨论了一致性边界的概念。我们不希望在每次进行更改时都锁定整个系统,因此必须选择哪些部分需要彼此保持一致。
+
For a small system, this is everything you need to go and play with the ideas of
domain-driven design. You now have the tools to build database-agnostic domain
models that represent the shared language of your business experts. Hurrah!
+对于一个小型系统来说,这已经是探索领域驱动设计(DDD)理念所需的一切了。你现在拥有了构建与数据库无关的领域模型的工具,
+这些模型能够体现你的业务专家所使用的通用语言。万岁!
+
NOTE: At the risk of laboring the point--we've been at pains to point out that
each pattern comes at a cost. Each layer of indirection has a price in terms
of complexity and duplication in our code and will be confusing to programmers
@@ -1094,7 +1399,12 @@ NOTE: At the risk of laboring the point--we've been at pains to point out that
use Django, and save yourself a lot of bother.
((("CRUD wrapper around a database")))
((("patterns, deciding whether you need to use them")))
+冒着重复强调这一点的风险:我们一直不厌其烦地指出,每种模式都是有代价的。每一层间接抽象都会在复杂性和代码重复方面付出代价,
+同时也会让从未见过这些模式的程序员感到困惑。如果你的应用本质上只是一个围绕数据库的简单 CRUD 封装,并且在可预见的未来也不会变得比这更复杂,
+_那么你并不需要这些模式_。尽管去用 Django 吧,这样可以为自己省去许多麻烦。
+
In Part II, we'll zoom out and talk about a bigger topic: if aggregates are our
boundary, and we can update only one at a time, how do we model processes that
cross consistency boundaries?
+
+在第二部分,我们将放大视角,讨论一个更大的主题:如果聚合是我们的边界,并且我们一次只能更新一个,那么我们该如何为跨越一致性边界的流程建模?
diff --git a/chapter_08_events_and_message_bus.asciidoc b/chapter_08_events_and_message_bus.asciidoc
index dcbbe761..c0e7b2ac 100644
--- a/chapter_08_events_and_message_bus.asciidoc
+++ b/chapter_08_events_and_message_bus.asciidoc
@@ -1,27 +1,40 @@
[[chapter_08_events_and_message_bus]]
== Events and the Message Bus
+事件与消息总线
((("events and the message bus", id="ix_evntMB")))
So far we've spent a lot of time and energy on a simple problem that we could
easily have solved with Django. You might be asking if the increased testability
and expressiveness are _really_ worth all the effort.
+到目前为止,我们花费了大量时间和精力解决一个可以轻松用Django解决的简单问题。你可能会问,增加的可测试性和表达能力是否 _真的_ 值得这些努力。
+
In practice, though, we find that it's not the obvious features that make a mess
of our codebases: it's the goop around the edge. It's reporting, and permissions,
and workflows that touch a zillion objects.
+然而,在实践中,我们发现并不是那些显而易见的功能让代码库变得混乱,而是边缘部分的杂乱。比如,报告、权限管理,以及涉及无数对象的工作流程。
+
Our example will be a typical notification requirement: when we can't allocate
an order because we're out of stock, we should alert the buying team. They'll
go and fix the problem by buying more stock, and all will be well.
+我们的示例将是一个典型的通知需求:当我们因为缺货而无法分配订单时,我们应该提醒采购团队。他们会通过采购更多的库存来解决问题,一切就迎刃而解了。
+
For a first version, our product owner says we can just send the alert by email.
+对于第一个版本,我们的产品负责人表示可以仅通过电子邮件发送提醒。
+
Let's see how our architecture holds up when we need to plug in some of the
mundane stuff that makes up so much of our systems.
+让我们看看当我们需要引入一些构成系统大部分的琐碎内容时,我们的架构能否经受住考验。
+
We'll start by doing the simplest, most expeditious thing, and talk about
why it's exactly this kind of decision that leads us to the Big Ball of Mud.
+我们将从最简单、最迅速的方法入手,并探讨为什么正是这种决定会将我们引向“大泥球”的困境。
+
((("Message Bus pattern")))
((("Domain Events pattern")))
((("events and the message bus", "events flowing through the system")))
@@ -33,9 +46,13 @@ those events and how to pass them to the message bus, and finally we'll show
how the Unit of Work pattern can be modified to connect the two together elegantly,
as previewed in <>.
+然后,我们将展示如何使用 _领域事件_ 模式将副作用与用例分离开,并且如何使用一个简单的 _消息总线_ 模式基于这些事件触发行为。
+我们会展示一些创建这些事件的选项,以及如何将它们传递给消息总线,最后将展示如何修改工作单元模式以优雅地将两者连接在一起,
+正如在<>中预览的一样。
+
[[message_bus_diagram]]
-.Events flowing through the system
+.Events flowing through the system(流经系统的事件)
image::images/apwp_0801.png[]
// TODO: add before diagram for contrast (?)
@@ -46,6 +63,8 @@ image::images/apwp_0801.png[]
The code for this chapter is in the
chapter_08_events_and_message_bus branch https://oreil.ly/M-JuL[on GitHub]:
+本章的代码位于 `chapter_08_events_and_message_bus` 分支,https://oreil.ly/M-JuL[在GitHub上]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -57,6 +76,7 @@ git checkout chapter_07_aggregate
=== Avoiding Making a Mess
+避免制造混乱
((("web controllers, sending email alerts via, avoiding")))
((("events and the message bus", "sending email alerts when out of stock", id="ix_evntMBeml")))
@@ -64,14 +84,19 @@ git checkout chapter_07_aggregate
So. Email alerts when we run out of stock. When we have new requirements like ones that _really_ have nothing to do with the core domain, it's all too easy to
start dumping these things into our web controllers.
+那么,当我们库存不足时发送电子邮件提醒。当我们遇到类似这样的新需求时,尤其是那些与核心领域 _并没有真正关系_ 的需求,很容易就会开始把这些东西堆到我们的Web控制器里。
+
==== First, Let's Avoid Making a Mess of Our Web Controllers
+首先,让我们避免把我们的 Web 控制器搞得一团糟
((("events and the message bus", "sending email alerts when out of stock", "avoiding messing up web controllers")))
As a one-off hack, this _might_ be OK:
+作为一个一次性的临时解决方案,这 _也许_ 还可以接受:
+
[[email_in_flask]]
-.Just whack it in the endpoint—what could go wrong? (src/allocation/entrypoints/flask_app.py)
+.Just whack it in the endpoint—what could go wrong? (src/allocation/entrypoints/flask_app.py)(直接把它塞到端点里——能出什么问题呢?)
====
[source,python]
[role="skip"]
@@ -102,8 +127,11 @@ def allocate_endpoint():
like this. Sending email isn't the job of our HTTP layer, and we'd like to be
able to unit test this new feature.
+...但不难看出,通过像这样打补丁,我们很快就可能陷入混乱。发送电子邮件并不是我们HTTP层的职责,而且我们希望能够对这个新功能进行单元测试。
+
==== And Let's Not Make a Mess of Our Model Either
+同时也不要让我们的模型陷入混乱
((("domain model", "email sending code in, avoiding")))
((("events and the message bus", "sending email alerts when out of stock", "avoiding messing up domain model")))
@@ -111,8 +139,10 @@ Assuming we don't want to put this code into our web controllers, because
we want them to be as thin as possible, we may look at putting it right at
the source, in the model:
+假设我们不想把这段代码放在我们的 Web 控制器中,因为我们希望它们尽可能简洁,那么我们可能会考虑直接把它放到源头——模型中:
+
[[email_in_model]]
-.Email-sending code in our model isn't lovely either (src/allocation/domain/model.py)
+.Email-sending code in our model isn't lovely either (src/allocation/domain/model.py)(我们模型中的邮件发送代码同样也不够优雅)
====
[source,python]
[role="non-head"]
@@ -130,12 +160,17 @@ the source, in the model:
But that's even worse! We don't want our model to have any dependencies on
infrastructure concerns like `email.send_mail`.
+但这就更糟糕了!我们不希望我们的模型对诸如 `email.send_mail` 这样的基础设施问题有任何依赖。
+
This email-sending thing is unwelcome _goop_ messing up the nice clean flow
of our system. What we'd like is to keep our domain model focused on the rule
"You can't allocate more stuff than is actually available."
+这个发送电子邮件的功能是不受欢迎的 _杂乱_,它破坏了我们系统的干净流畅。我们希望的是,让我们的领域模型专注于规则:“你不能分配超过实际可用的库存。”
+
==== Or the Service Layer!
+或者用服务层!
((("service layer", "sending email alerts when out of stock, avoiding")))
((("events and the message bus", "sending email alerts when out of stock", "out of place in the service layer")))
@@ -143,11 +178,15 @@ The requirement "Try to allocate some stock, and send an email if it fails" is
an example of workflow orchestration: it's a set of steps that the system has
to follow to [.keep-together]#achieve# a goal.
+需求“尝试分配一些库存,如果失败则发送一封邮件”是一个工作流编排的示例:它是一组系统必须遵循以 [.keep-together]#实现# 目标的步骤。
+
We've written a service layer to manage orchestration for us, but even here
the feature feels out of place:
+我们已经编写了一个服务层来为我们管理编排,但即使在这里,这个功能也显得格格不入:
+
[[email_in_services]]
-.And in the service layer, it's out of place (src/allocation/service_layer/services.py)
+.And in the service layer, it's out of place (src/allocation/service_layer/services.py)(而在服务层中,它显得格格不入)
====
[source,python]
[role="non-head"]
@@ -177,7 +216,10 @@ Catching an exception and reraising it? It could be worse, but it's
definitely making us unhappy. Why is it so hard to find a suitable home for
this code?
+捕获一个异常然后重新抛出?这可能还不算最糟,但它确实让我们感到不快。为什么要为这段代码找到一个合适的归宿会这么困难呢?
+
=== Single Responsibility Principle
+单一职责原则
((("single responsibility principle (SRP)")))
((("events and the message bus", "sending email alerts when out of stock", "violating the single responsibility principle")))
@@ -187,13 +229,21 @@ Our use case is allocation. Our endpoint, service function, and domain methods
are all called [.keep-together]#`allocate`#, not
`allocate_and_send_mail_if_out_of_stock`.
+实际上,这违反了__单一职责原则__(SRP)。脚注:[
+这个原则是 https://oreil.ly/AIdSD[SOLID] 中的 _S_。]
+我们的用例是分配。我们的端点、服务函数和领域方法都叫作 [.keep-together]#`allocate`#,而不是 `allocate_and_send_mail_if_out_of_stock`。
+
TIP: Rule of thumb: if you can't describe what your function does without using
words like "then" or "and," you might be violating the SRP.
+经验法则:如果你在描述函数的作用时无法避免使用“然后”或“和”这样的词语,那么你可能违反了单一职责原则(SRP)。
One formulation of the SRP is that each class should have only a single reason
to change. When we switch from email to SMS, we shouldn't have to update our
`allocate()` function, because that's clearly a separate responsibility.
+单一职责原则(SRP)的一种表述是,每个类应该只有一个导致其变化的原因。当我们从电子邮件切换到短信时,
+不应该需要更新我们的`allocate()`函数,因为这显然是一个独立的职责。
+
((("choreography")))
((("orchestration", "changing to choreography")))
To solve the problem, we're going to split the orchestration
@@ -205,31 +255,46 @@ of sending an alert belongs elsewhere. We should be able to turn this feature
on or off, or to switch to SMS notifications instead, without needing to change
the rules of our domain model.
+为了解决这个问题,我们准备将编排分解为独立的步骤,这样不同的关注点就不会混杂在一起。脚注:[
+我们的技术审阅者Ed Jung喜欢说,当你从命令式流程控制切换到基于事件的流程控制时,你就将 _编排_ 转换成了 _协作_。]
+领域模型的职责是知道我们缺货了,但发送警报的责任应该属于其他地方。我们应该能够开启或关闭此功能,或者切换到短信通知,而不需要修改领域模型的规则。
+
We'd also like to keep the service layer free of implementation details. We
want to apply the dependency inversion principle to notifications so that our
service layer depends on an abstraction, in the same way as we avoid depending
on the database by using a unit of work.
+我们还希望让服务层不包含实现细节。我们希望将依赖反转原则应用于通知,
+这样我们的服务层就依赖于一个抽象,就像我们通过使用工作单元(unit of work)来避免依赖数据库一样。
+
=== All Aboard the Message Bus!
+全员登上消息总线!
The patterns we're going to introduce here are _Domain Events_ and the _Message Bus_.
We can implement them in a few ways, so we'll show a couple before settling on
the one we like most.
+我们将在这里介绍的模式是 _领域事件(Domain Events)_ 和 _消息总线(Message Bus)_。它们可以通过几种方式实现,
+因此我们会先展示几个实现方式,然后再确定我们最喜欢的那一个。
+
// TODO: at this point the message bus is really just a dispatcher. could also mention
// pubsub. once we get a queue, it's more justifiably a bus
==== The Model Records Events
+模型记录事件
((("events and the message bus", "recording events")))
First, rather than being concerned about emails, our model will be in charge of
recording _events_—facts about things that have happened. We'll use a message
bus to respond to events and invoke a new operation.
+首先,我们的模型不再关注电子邮件,而是负责记录 _事件(events)_ ——即已经发生的事实。我们将使用消息总线来响应这些事件并触发新的操作。
+
==== Events Are Simple Dataclasses
+事件是简单的数据类
((("dataclasses", "events")))
((("events and the message bus", "events as simple dataclasses")))
@@ -237,13 +302,18 @@ An _event_ is a kind of _value object_. Events don't have any behavior, because
they're pure data structures. We always name events in the language of the
domain, and we think of them as part of our domain model.
+_事件_ 是一种 _值对象_。事件没有任何行为,因为它们是纯数据结构。我们总是用领域的语言为事件命名,并将它们视为领域模型的一部分。
+
We could store them in _model.py_, but we may as well keep them in their own file
(this might be a good time to consider refactoring out a directory called
_domain_ so that we have _domain/model.py_ and _domain/events.py_):
+我们可以将它们存储在 _model.py_ 中,但不妨将它们放在单独的文件中(此时,可以考虑重构出一个名为 _domain_ 的目录,
+这样我们就有了 _domain/model.py_ 和 _domain/events.py_):
+
[role="nobreakinside less_space"]
[[events_dot_py]]
-.Event classes (src/allocation/domain/events.py)
+.Event classes (src/allocation/domain/events.py)(事件类)
====
[source,python]
----
@@ -264,24 +334,31 @@ class OutOfStock(Event): #<2>
<1> Once we have a number of events, we'll find it useful to have a parent
class that can store common attributes. It's also useful for type
hints in our message bus, as you'll see shortly.
+当我们有多个事件时,会发现拥有一个父类来存储通用属性是很有用的。此外,这对于在消息总线中的类型提示也很有帮助,稍后你会看到这一点。
<2> `dataclasses` are great for domain events too.
+`dataclasses` 对于领域事件也非常出色。
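+
+Concretely, the whole events file can start out this small (a sketch mirroring
+the listing above):
+
+[[events_file_sketch]]
+.An events file can start out tiny (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from dataclasses import dataclass
+
+
+class Event:
+    # A home for common attributes later on, and a handy
+    # type-hint target for the message bus.
+    pass
+
+
+@dataclass
+class OutOfStock(Event):
+    sku: str
+----
+====
+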
==== The Model Raises Events
+模型触发事件
((("events and the message bus", "domain model raising events")))
((("domain model", "raising events")))
When our domain model records a fact that happened, we say it _raises_ an event.
+当我们的领域模型记录一个发生的事实时,我们称其为 _触发(raise)_ 一个事件。
+
((("aggregates", "testing Product object to raise events")))
Here's what it will look like from the outside; if we ask `Product` to allocate
but it can't, it should _raise_ an event:
+从外部来看,它会是这样的:如果我们请求 `Product` 分配库存但失败了,它应该 _触发_ 一个事件:
+
[[test_raising_event]]
-.Test our aggregate to raise events (tests/unit/test_product.py)
+.Test our aggregate to raise events (tests/unit/test_product.py)(测试我们的聚合以触发事件)
====
[source,python]
----
@@ -298,12 +375,15 @@ def test_records_out_of_stock_event_if_cannot_allocate():
<1> Our aggregate will expose a new attribute called `.events` that will contain
a list of facts about what has happened, in the form of `Event` objects.
+我们的聚合将公开一个名为 `.events` 的新属性,该属性将以 `Event` 对象的形式包含一个关于已发生事实的列表。
Here's what the model looks like on the inside:
+以下是模型的内部实现:
+
[[domain_event]]
-.The model raises a domain event (src/allocation/domain/model.py)
+.The model raises a domain event (src/allocation/domain/model.py)(模型触发了一个领域事件)
====
[source,python]
[role="non-head"]
@@ -326,12 +406,15 @@ class Product:
====
<1> Here's our new `.events` attribute in use.
+以下是我们使用新的 `.events` 属性的示例。
<2> Rather than invoking some email-sending code directly, we record those
events at the place they occur, using only the language of the domain.
+我们并没有直接调用发送电子邮件的代码,而是在事件发生的地方记录这些事件,仅使用领域的语言来描述。
<3> We're also going to stop raising an exception for the out-of-stock
case. The event will do the job the exception was doing.
+我们还将停止在缺货情况下抛出异常。事件将完成之前由异常承担的任务。
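+
+Pulling those three callouts together, the relevant part of the model looks
+roughly like this (a sketch; `Batch`, `OrderLine`, and the `events` module are
+as in the listings above):
+
+[[model_events_sketch]]
+.Recording an event instead of raising an exception (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from allocation.domain import events
+
+
+class Product:
+    def __init__(self, sku, batches, version_number=0):
+        self.sku = sku
+        self.batches = batches
+        self.version_number = version_number
+        self.events = []  # the new attribute holding Event objects
+
+    def allocate(self, line):
+        try:
+            batch = next(b for b in sorted(self.batches) if b.can_allocate(line))
+            batch.allocate(line)
+            self.version_number += 1
+            return batch.reference
+        except StopIteration:
+            # record the fact in domain language, instead of
+            # raising an OutOfStock exception as we used to
+            self.events.append(events.OutOfStock(line.sku))
+            return None
+----
+====
+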
@@ -343,10 +426,13 @@ NOTE: We're actually addressing a code smell we had until now, which is that we
confusing to have to reason about events and exceptions together.
((("control flow, using exceptions for")))
((("exceptions", "using for control flow")))
+实际上,我们正在解决之前存在的一种代码异味,也就是我们 https://oreil.ly/IQB51[用异常来控制流程]。通常来说,如果你正在实现领域事件,
+不要通过抛出异常来描述相同的领域概念。正如你稍后会在处理工作单元模式中的事件时看到的那样,同时考虑事件和异常是令人困惑的。
==== The Message Bus Maps Events to Handlers
+消息总线将事件映射到处理器
((("message bus", "mapping events to handlers")))
((("events and the message bus", "message bus mapping events to handlers")))
@@ -356,8 +442,11 @@ handler function." In other words, it's a simple publish-subscribe system.
Handlers are _subscribed_ to receive events, which we publish to the bus. It
sounds harder than it is, and we usually implement it with a dict:
+消息总线的基本作用是,“当我看到这个事件时,我应该调用以下处理器函数。” 换句话说,它是一个简单的发布-订阅系统。处理器 _订阅_ 接收事件,
+而我们将事件发布到总线中。这听起来比实际要复杂,而我们通常用一个字典来实现它:
+
[[messagebus]]
-.Simple message bus (src/allocation/service_layer/messagebus.py)
+.Simple message bus (src/allocation/service_layer/messagebus.py)(简单消息总线)
====
[source,python]
----
@@ -386,16 +475,21 @@ NOTE: Note that the message bus as implemented doesn't give us concurrency becau
"recipe" for how to run each use case is written in a single place. See the
following sidebar.
((("concurrency", "not provided by message bus implementation")))
+请注意,目前实现的消息总线并不支持并发,因为一次只能运行一个处理器。我们的目标并不是支持并行线程,而是从概念上分离任务,
+并尽可能让每个工作单元保持小巧。这有助于我们理解代码库,因为每个用例的“运行步骤”都集中记录在一个地方。请参阅以下侧边栏。
[role="nobreakinside less_space"]
[[celery_sidebar]]
-.Is This Like Celery?
+.Is This Like Celery?(这像 Celery 吗?)
*******************************************************************************
((("message bus", "Celery and")))
_Celery_ is a popular tool in the Python world for deferring self-contained
chunks of work to an asynchronous task queue.((("Celery tool"))) The message bus we're
presenting here is very different, so the short answer to the above question is no; our message bus
has more in common with an Express.js app, a UI event loop, or an actor framework.
+
+_Celery_ 是 _Python_ 领域中一个流行的工具,用于将独立的工作块推送到异步任务队列中。我们在这里介绍的消息总线与它非常不同,
+所以对于上面问题的简短回答是“不”;我们的消息总线更类似于 Express.js 应用程序、UI 事件循环或 actor 框架。
// TODO: this "more in common with" line is not super-helpful atm. maybe onclick callbacks in js would be a more helpful example
((("external events")))
@@ -410,6 +504,11 @@ across units of work within a single process/service can be extended across
multiple processes--which may be different containers within the same
service, or totally different microservices.
+如果你确实有将工作从主线程移出的需求,你仍然可以使用我们基于事件的比喻,不过我们建议你为此使用 _外部事件(external event)_。
+关于这一点,在<>中有更多讨论,但关键在于,如果你实现了一种将事件持久化到集中存储的方法,
+就可以让其他容器或其他微服务订阅这些事件。然后,那种在单个进程/服务内使用事件来分离工作单元间职责的概念,
+就可以扩展到多个进程中——这些进程可以是同一服务中的不同容器,也可以是完全不同的微服务。
+
If you follow us in this approach, your API for distributing tasks
is your event [.keep-together]##classes—##or a JSON representation of them. This allows
you a lot of flexibility in who you distribute tasks to; they need not
@@ -417,10 +516,15 @@ necessarily be Python services. Celery's API for distributing tasks is
essentially "function name plus arguments," which is more restrictive,
and Python-only.
+如果你按照我们的这种方法,你用于分发任务的API就是你的事件 [.keep-together]##类## ——或者是它们的JSON表示形式。
+这为你在分发任务的对象上提供了很大的灵活性;这些对象不一定非得是 _Python_ 服务。而 _Celery_ 用于分发任务的API本质上是“函数名称加参数”,
+这种方法更具限制性,并且仅限于 _Python_。
+
*******************************************************************************
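+
+Back to our own bus: for reference, the whole thing amounts to something like
+the following sketch (the `email` adapter module is an assumption; any
+notification adapter would do):
+
+[[messagebus_sketch]]
+.The whole bus, in miniature (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from allocation.adapters import email
+from allocation.domain import events
+
+
+def handle(event: events.Event):
+    for handler in HANDLERS[type(event)]:
+        handler(event)
+
+
+def send_out_of_stock_notification(event: events.OutOfStock):
+    email.send_mail("stock@made.com", f"Out of stock for {event.sku}")
+
+
+HANDLERS = {
+    events.OutOfStock: [send_out_of_stock_notification],
+}
+----
+====
+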
=== Option 1: The Service Layer Takes Events from the Model and Puts Them on the Message Bus
+选项 1:服务层从模型中获取事件并将其放置到消息总线上
((("domain model", "events from, passing to message bus in service layer")))
((("message bus", "service layer with explicit message bus")))
@@ -432,10 +536,15 @@ handlers whenever an event happens. Now all we need is to connect the two. We
need something to catch events from the model and pass them to the message
bus--the _publishing_ step.
+我们的领域模型触发事件,而我们的消息总线将在事件发生时调用相应的处理器。现在我们只需要将两者连接起来。
+我们需要某种机制来捕获模型中的事件并将其传递到消息总线——这是 _发布_ 的步骤。
+
The simplest way to do this is by adding some code into our service layer:
+最简单的方式是在我们的服务层中添加一些代码:
+
[[service_talks_to_messagebus]]
-.The service layer with an explicit message bus (src/allocation/service_layer/services.py)
+.The service layer with an explicit message bus (src/allocation/service_layer/services.py)(具有显式消息总线的服务层)
====
[source,python]
[role="non-head"]
@@ -463,18 +572,23 @@ def allocate(
<1> We keep the `try/finally` from our ugly earlier implementation (we haven't
gotten rid of _all_ exceptions yet, just `OutOfStock`).
+我们保留了之前丑陋实现中的 `try/finally`(我们还没有完全去掉 _所有_ 异常,只是移除了 `OutOfStock`)。
<2> But now, instead of depending directly on an email infrastructure,
the service layer is just in charge of passing events from the model
up to the message bus.
+但现在,服务层不再直接依赖于电子邮件基础设施,而只是负责将模型中的事件传递到消息总线上。
That already avoids some of the ugliness that we had in our naive
implementation, and we have several systems that work like this one, in which the
service layer explicitly collects events from aggregates and passes them to
the message bus.
+这已经避免了我们在原始实现中遇到的一些丑陋之处,而且我们有多个类似的系统,其中服务层明确地从聚合中收集事件并将它们传递到消息总线。
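+
+Putting the callouts together, Option 1's service function comes out roughly
+like this (a sketch; the UoW and message bus interfaces are as shown earlier):
+
+[[option1_assembled_sketch]]
+.Option 1, assembled (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from allocation.domain.model import OrderLine
+from allocation.service_layer import messagebus, unit_of_work
+
+
+class InvalidSku(Exception):
+    pass
+
+
+def allocate(
+    orderid: str, sku: str, qty: int,
+    uow: unit_of_work.AbstractUnitOfWork,
+) -> str:
+    line = OrderLine(orderid, sku, qty)
+    with uow:
+        product = uow.products.get(sku=line.sku)
+        if product is None:
+            raise InvalidSku(f"Invalid sku {line.sku}")
+        try:
+            batchref = product.allocate(line)
+            uow.commit()
+            return batchref
+        finally:
+            # the publishing step: pass whatever events the model
+            # recorded up to the bus, even if allocation failed
+            for event in product.events:
+                messagebus.handle(event)
+----
+====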
+
=== Option 2: The Service Layer Raises Its Own Events
+选项 2:服务层触发自己的事件
((("service layer", "raising its own events")))
((("events and the message bus", "service layer raising its own events")))
@@ -483,9 +597,11 @@ Another variant on this that we've used is to have the service layer
in charge of creating and raising events directly, rather than having them
raised by the domain model:
+我们使用过的另一种变体是让服务层直接负责创建和触发事件,而不是由领域模型触发事件:
+
[[service_layer_raises_events]]
-.Service layer calls messagebus.handle directly (src/allocation/service_layer/services.py)
+.Service layer calls messagebus.handle directly (src/allocation/service_layer/services.py)(服务层直接调用 messagebus.handle)
====
[source,python]
[role="skip"]
@@ -512,14 +628,20 @@ def allocate(
and it's easier to reason about: we always commit unless something goes
wrong. Committing when we haven't changed anything is safe and keeps the
code uncluttered.
+和以前一样,即使分配失败我们也会提交,因为这样代码更简单且更易于理解:除非出问题,否则我们总是提交。
+当没有更改任何内容时提交是安全的,同时也能保持代码简洁。
Again, we have applications in production that implement the pattern in this
way. What works for you will depend on the particular trade-offs you face, but
we'd like to show you what we think is the most elegant solution, in which we
put the unit of work in charge of collecting and raising events.
+同样,我们也有一些生产中的应用程序是以这种方式实现该模式的。哪种方法适合你,取决于你所面临的具体权衡,
+但我们想向你展示我们认为最优雅的解决方案,即让工作单元负责收集和触发事件。
+
=== Option 3: The UoW Publishes Events to the Message Bus
+选项 3:工作单元将事件发布到消息总线
((("message bus", "Unit of Work publishing events to")))
((("events and the message bus", "UoW publishes events to message bus")))
@@ -528,9 +650,12 @@ The UoW already has a `try/finally`, and it knows about all the aggregates
currently in play because it provides access to the repository. So it's
a good place to spot events and pass them to the message bus:
+工作单元已经有了一个 `try/finally`,并且它了解当前正在使用的所有聚合,因为它提供了对仓储的访问。
+因此,它是捕捉事件并将它们传递到消息总线的一个好位置:
+
[[uow_with_messagebus]]
-.The UoW meets the message bus (src/allocation/service_layer/unit_of_work.py)
+.The UoW meets the message bus (src/allocation/service_layer/unit_of_work.py)(工作单元与消息总线相遇)
====
[source,python]
----
@@ -563,24 +688,28 @@ class SqlAlchemyUnitOfWork(AbstractUnitOfWork):
<1> We'll change our commit method to require a private `._commit()`
method from subclasses.
+我们将修改提交方法,使其需要子类实现一个私有的 `._commit()` 方法。
<2> After committing, we run through all the objects that our
repository has seen and pass their events to the message bus.
+在提交之后,我们会遍历仓储中所有被访问过的对象,并将它们的事件传递到消息总线。
<3> That relies on the repository keeping track of aggregates that have been loaded
using a new attribute, `.seen`, as you'll see in the next listing.
((("repositories", "repository keeping track of aggregates passing through it")))
((("aggregates", "repository keeping track of aggregates passing through it")))
+这依赖于仓储通过一个新属性 `.seen` 来跟踪已加载的聚合对象,正如你将在接下来的代码示例中看到的。
NOTE: Are you wondering what happens if one of the
handlers fails? We'll discuss error handling in detail in <>.
+你是否在想,如果某个处理器失败会发生什么?我们将在 <> 中详细讨论错误处理。
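+
+Spelled out, the UoW's new commit-and-publish behavior is roughly the
+following (a sketch; `self.products` is the repository attribute that concrete
+UoWs set up):
+
+[[uow_publish_sketch]]
+.Commit, then publish (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+import abc
+
+from allocation.service_layer import messagebus
+
+
+class AbstractUnitOfWork(abc.ABC):
+    def commit(self):
+        self._commit()  # subclasses provide the real commit
+        self.publish_events()
+
+    def publish_events(self):
+        # rely on the repository's .seen set (next listing)
+        for product in self.products.seen:
+            while product.events:
+                event = product.events.pop(0)
+                messagebus.handle(event)
+
+    @abc.abstractmethod
+    def _commit(self):
+        raise NotImplementedError
+----
+====
+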
//IDEA: could change ._commit() to requiring super().commit()
[[repository_tracks_seen]]
-.Repository tracks aggregates that pass through it (src/allocation/adapters/repository.py)
+.Repository tracks aggregates that pass through it (src/allocation/adapters/repository.py)(仓储跟踪通过它的聚合)
====
[source,python]
----
@@ -625,26 +754,34 @@ class SqlAlchemyRepository(AbstractRepository):
We use a `set` called `.seen` to store them. That means our implementations
need to call +++super().__init__()+++.
((("super function")))
+为了让工作单元能够发布新的事件,它需要能够向仓储查询:在本次会话中使用过哪些 `Product` 对象。
+我们使用一个名为 `.seen` 的 `set` 来存储这些对象。这意味着我们的实现需要调用 +++super().__init__()+++。
<2> The parent `add()` method adds things to `.seen`, and now requires subclasses
to implement `._add()`.
+父类的 `add()` 方法会将对象添加到 `.seen` 中,并且现在要求子类实现 `._add()` 方法。
<3> Similarly, `.get()` delegates to a `._get()` function, to be implemented by
subclasses, in order to capture objects seen.
+类似地,`.get()` 委托给一个 `._get()` 函数,由子类实现,以便捕获被访问过的对象。
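+
+Assembled, the tracking base class looks something like this (a sketch of the
+listing above):
+
+[[tracking_repo_sketch]]
+.A repository that remembers what it has seen (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+import abc
+from typing import Set
+
+from allocation.domain import model
+
+
+class AbstractRepository(abc.ABC):
+    def __init__(self):
+        self.seen = set()  # type: Set[model.Product]
+
+    def add(self, product: model.Product):
+        self._add(product)
+        self.seen.add(product)
+
+    def get(self, sku) -> model.Product:
+        product = self._get(sku)
+        if product:
+            self.seen.add(product)
+        return product
+
+    @abc.abstractmethod
+    def _add(self, product: model.Product):
+        raise NotImplementedError
+
+    @abc.abstractmethod
+    def _get(self, sku) -> model.Product:
+        raise NotImplementedError
+----
+====
+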
NOTE: The use of pass:[._underscorey()] methods and subclassing is definitely not
the only way you could implement these patterns. Have a go at the
<> in this chapter and experiment
with some alternatives.
+使用 pass:[._underscorey()] 方法和子类化绝对不是实现这些模式的唯一方法。
+试着完成本章中的 <>,并尝试一些替代方案。
After the UoW and repository collaborate in this way to automatically keep
track of live objects and process their events, the service layer can be
totally free of event-handling concerns:
((("service layer", "totally free of event handling concerns")))
+在工作单元和仓储以这种方式协作、自动跟踪活动对象并处理它们的事件之后,服务层就可以完全不用关心事件处理的事宜了:
+
[[services_clean]]
-.Service layer is clean again (src/allocation/service_layer/services.py)
+.Service layer is clean again (src/allocation/service_layer/services.py)(服务层再次变得简洁)
====
[source,python]
----
@@ -671,9 +808,11 @@ We do also have to remember to change the fakes in the service layer and make th
call `super()` in the right places, and to implement underscorey methods, but the
changes are minimal:
+我们还需要记住修改服务层中的伪对象,确保在正确的位置调用 `super()`,并实现那些以下划线开头的方法,不过这些更改是很小的:
+
[[services_tests_ugly_fake_messagebus]]
-.Service-layer fakes need tweaking (tests/unit/test_services.py)
+.Service-layer fakes need tweaking (tests/unit/test_services.py)(服务层的伪对象需要调整)
====
[source,python]
----
@@ -701,7 +840,7 @@ class FakeUnitOfWork(unit_of_work.AbstractUnitOfWork):
[role="nobreakinside less_space"]
[[get_rid_of_commit]]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
******************************************************************************
((("inheritance, avoiding use of with wrapper class")))
@@ -714,11 +853,16 @@ Harry around the head with a plushie snake"? Hey, our code listings are
only meant to be examples, not the perfect solution! Why not go see if you
can do better?
+你是否觉得所有那些 `._add()` 和 `._commit()` 方法“超级恶心”?正如我们尊敬的技术审阅者 Hynek 所说的那样,
+它是否“让你想拿一条软绵绵的玩具蛇去揍 Harry 一顿”?嘿,我们的代码示例仅仅是为了演示,而不是完美的解决方案!为什么不去看看你是否能做得更好呢?
+
One _composition over inheritance_ way to go would be to implement a
wrapper class:
+一种采用 _组合优于继承_ 的方式是实现一个包装类:
+
[[tracking_repo_wrapper]]
-.A wrapper adds functionality and then delegates (src/adapters/repository.py)
+.A wrapper adds functionality and then delegates (src/adapters/repository.py)(一个包装器添加了功能后再进行委托)
====
[source,python]
[role="skip"]
@@ -744,15 +888,21 @@ class TrackingRepository:
<1> By wrapping the repository, we can call the actual `.add()`
and `.get()` methods, avoiding weird underscorey methods.
+通过包装仓储,我们可以调用实际的 `.add()` 和 `.get()` 方法,从而避免使用那些奇怪的以下划线开头的方法。
((("Unit of Work pattern", "getting rid of underscorey methods in UoW class")))
See if you can apply a similar pattern to our UoW class in
order to get rid of those Java-y `_commit()` methods too. You can find the code
on https://github.com/cosmicpython/code/tree/chapter_08_events_and_message_bus_exercise[GitHub].
+试试看能否将类似的模式应用到我们的工作单元类中,从而去掉那些有点像 Java 风格的 `_commit()` 方法。
+你可以在 https://github.com/cosmicpython/code/tree/chapter_08_events_and_message_bus_exercise[GitHub] 找到对应的代码。
+
((("abstract base classes (ABCs)", "switching to typing.Protocol")))
Switching all the ABCs to `typing.Protocol` is a good way to force yourself to
avoid using inheritance. Let us know if you come up with something nice!
+
+将所有的抽象基类(ABCs)切换为 `typing.Protocol` 是一个很好的方法,可以迫使你避免使用继承。如果你想出了一些不错的方案,请告诉我们!
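+
+For instance, a `typing.Protocol` version of the repository might look like
+this (a sketch; any class with matching attributes satisfies it, no base class
+required):
+
+[[repo_protocol_sketch]]
+.A Protocol instead of an ABC (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+from typing import Protocol, Set
+
+from allocation.domain import model
+
+
+class Repository(Protocol):
+    seen: Set[model.Product]
+
+    def add(self, product: model.Product) -> None:
+        ...
+
+    def get(self, sku: str) -> model.Product:
+        ...
+----
+====
+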
******************************************************************************
You might be starting to worry that maintaining these fakes is going to be a
@@ -761,36 +911,51 @@ it's not a lot of work. Once your project is up and running, the interface for
your repository and UoW abstractions really don't change much. And if you're
using ABCs, they'll help remind you when things get out of sync.
+你可能开始担心维护这些伪对象(fakes)会成为一个维护负担。毫无疑问,这确实需要一些工作,但根据我们的经验,这并不会耗费太多精力。
+一旦你的项目启动并运行起来,仓储和工作单元抽象的接口实际上变化不大。而且,如果你使用抽象基类(ABCs),它们会在接口不同步时提醒你。
+
=== Wrap-Up
+总结
Domain events give us a way to handle workflows in our system. We often find,
listening to our domain experts, that they express requirements in a causal or
temporal way—for example, "When we try to allocate stock but there's none
available, then we should send an email to the buying team."
+领域事件为我们提供了一种方式来处理系统中的工作流。我们经常发现,倾听领域专家时,他们会以因果或时间顺序的方式表达需求——例如,
+“当我们尝试分配库存但没有库存可用时,我们应该向采购团队发送一封电子邮件。”
+
The magic words "When X, then Y" often tell us about an event that we can make
concrete in our system. Treating events as first-class things in our model helps
us make our code more testable and observable, and it helps isolate concerns.
+“当 X,然后 Y”这样的魔法词语通常暗示我们可以在系统中实现的一个事件。在模型中将事件视为一等公民有助于我们使代码更加可测试和可观察,
+同时也有助于隔离关注点。
+
((("message bus", "pros and cons or trade-offs")))
((("events and the message bus", "pros and cons or trade-offs")))
And <> shows the trade-offs as we
see them.
+而 <> 展示了我们所看到的权衡。
+
[[chapter_08_events_and_message_bus_tradeoffs]]
[options="header"]
-.Domain events: the trade-offs
+.Domain events: the trade-offs(领域事件:权衡分析)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* A message bus gives us a nice way to separate responsibilities when we have
to take multiple actions in response to a request.
+当我们需要对一个请求采取多个动作时,消息总线为我们提供了一种很好的方式来分离职责。
* Event handlers are nicely decoupled from the "core" application logic,
making it easy to change their implementation later.
+事件处理器与“核心”应用逻辑很好地解耦,这使得以后更改其实现变得容易。
* Domain events are a great way to model the real world, and we can use them
as part of our business language when modeling with stakeholders.
+领域事件是建模现实世界的一种绝佳方式,在与利益相关者进行建模时,我们可以将它们作为业务语言的一部分使用。
a|
@@ -798,6 +963,8 @@ a|
in which the unit of work raises events for us is _neat_ but also magic. It's not
obvious when we call `commit` that we're also going to go and send email to
people.
+消息总线是一个需要额外理解的组件;让工作单元为我们触发事件的实现方式虽然很 _巧妙_,但也有些“魔法”感。当我们调用 `commit` 时,
+并不容易看出我们还会去给人们发送电子邮件。
* What's more, that hidden event-handling code executes _synchronously_,
meaning your service-layer function
@@ -805,15 +972,19 @@ a|
could cause unexpected performance problems in your web endpoints
(adding asynchronous processing is possible but makes things even _more_ confusing).
((("synchronous execution of event-handling code")))
+此外,这些隐藏的事件处理代码是 _同步_ 执行的,这意味着你的服务层函数在任何事件的所有处理器完成之前都不会结束。
+这可能会在你的 Web 端点中引发意想不到的性能问题(添加异步处理是可能的,但会让事情变得更加 _复杂_)。
* More generally, event-driven workflows can be confusing because after things
are split across a chain of multiple handlers, there is no single place
in the system where you can understand how a request will be fulfilled.
+更普遍地说,事件驱动的工作流可能会令人困惑,因为当处理被分散到多个处理器链中后,系统中就没有一个单一的位置可以让你清楚地了解一个请求是如何被完成的。
* You also open yourself up to the possibility of circular dependencies between your
event handlers, and infinite loops.
((("dependencies", "circular dependencies between event handlers")))
((("events and the message bus", startref="ix_evntMB")))
+你还可能会面临事件处理器之间出现循环依赖和无限循环的风险。
a|
|===
@@ -825,43 +996,55 @@ boundaries where we guarantee consistency. People often ask, "What
should I do if I need to change multiple aggregates as part of a request?" Now
we have the tools we need to answer that question.
+不过,事件的用途远不限于发送电子邮件。在 <> 中,我们花费了大量时间来说服你应该定义聚合,
+或者说定义那些我们可以保证一致性的边界。人们经常会问,“如果我需要在一个请求中修改多个聚合,我该怎么办?” 现在我们有了回答这个问题所需的工具。
+
If we have two things that can be transactionally isolated (e.g., an order and a
[.keep-together]#product#), then we can make them _eventually consistent_ by using events. When an
order is canceled, we should find the products that were allocated to it
and remove the [.keep-together]#allocations#.
+如果我们有两个可以在事务上隔离的对象(例如,一个订单和一个 [.keep-together]#产品#),那么我们可以通过使用事件使它们 _最终一致_。
+当一个订单被取消时,我们应该找到分配给它的产品并移除这些 [.keep-together]#分配#。
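+
+A hypothetical sketch of what that might look like (`OrderCancelled`, its
+`skus` attribute, and `Product.deallocate()` are invented names for
+illustration; they are not part of this book's codebase):
+
+[[order_cancelled_sketch]]
+.An eventually consistent cleanup as just another handler (hypothetical sketch)
+====
+[source,python]
+[role="skip"]
+----
+def remove_allocations_for_cancelled_order(
+    event: OrderCancelled,  # hypothetical event
+    uow: unit_of_work.AbstractUnitOfWork,
+):
+    with uow:
+        for sku in event.skus:
+            product = uow.products.get(sku=sku)
+            # hypothetical model method that undoes the allocation
+            product.deallocate(event.orderid, sku)
+        uow.commit()
+----
+====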
+
[role="nobreakinside less_space"]
-.Domain Events and the Message Bus Recap
+.Domain Events and the Message Bus Recap(领域事件和消息总线回顾)
*****************************************************************
((("events and the message bus", "domain events and message bus recap")))
((("message bus", "recap")))
-Events can help with the single responsibility principle::
+Events can help with the single responsibility principle(事件可以帮助贯彻单一职责原则)::
Code gets tangled up when we mix multiple concerns in one place. Events can
help us to keep things tidy by separating primary use cases from secondary
ones.
We also use events for communicating between aggregates so that we don't
need to run long-running transactions that lock against multiple tables.
+当我们将多个关注点混杂在一起时,代码就会变得复杂。事件可以通过将主要用例与次要用例分离来帮助我们保持代码简洁。
+我们还使用事件在聚合之间进行通信,这样就不需要运行会锁定多个表的长时间事务。
-A message bus routes messages to handlers::
+A message bus routes messages to handlers(消息总线将消息路由到处理器)::
You can think of a message bus as a dict that maps from events to their
consumers. It doesn't "know" anything about the meaning of events; it's just
a piece of dumb infrastructure for getting messages around the system.
+你可以将消息总线看作一个从事件映射到其消费者的字典。它并不“了解”事件的含义;它只是一个将消息在系统中分发的简单基础设施。
-Option 1: Service layer raises events and passes them to message bus::
+Option 1: Service layer raises events and passes them to message bus(选项 1:服务层触发事件并将其传递到消息总线)::
The simplest way to start using events in your system is to raise them from
handlers by calling `bus.handle(some_new_event)` after you commit your
unit of work.
((("service layer", "raising events and passing them to message bus")))
+在系统中开始使用事件的最简单方法是从处理器中触发它们,即在提交工作单元后调用 `bus.handle(some_new_event)`。
-Option 2: Domain model raises events, service layer passes them to message bus::
+Option 2: Domain model raises events, service layer passes them to message bus(选项 2:领域模型触发事件,服务层将它们传递到消息总线)::
The logic about when to raise an event really should live with the model, so
we can improve our system's design and testability by raising events from
the domain model. It's easy for our handlers to collect events off the model
objects after `commit` and pass them to the bus.
((("domain model", "raising events and service layer passing them to message bus")))
+关于何时触发事件的逻辑确实应该存在于模型中,因此通过从领域模型触发事件,我们可以改进系统的设计和测试性。在 `commit` 之后,
+处理器可以很容易地从模型对象中收集事件并将它们传递到消息总线。
-Option 3: UoW collects events from aggregates and passes them to message bus::
+Option 3: UoW collects events from aggregates and passes them to message bus(选项 3:工作单元从聚合中收集事件并将它们传递到消息总线)::
Adding `bus.handle(aggregate.events)` to every handler is annoying, so we
can tidy up by making our unit of work responsible for raising events that
were raised by loaded objects.
@@ -869,8 +1052,12 @@ Option 3: UoW collects events from aggregates and passes them to message bus::
and easy to use once it's set up.
((("aggregates", "UoW collecting events from and passing them to message bus")))
((("Unit of Work pattern", "UoW collecting events from aggregates and passing them to message bus")))
+在每个处理器中添加 `bus.handle(aggregate.events)` 会很繁琐,因此我们可以通过让工作单元负责触发由已加载对象触发的事件来简化流程。
+虽然这是最复杂的设计,并且可能依赖于 ORM 的一些“魔法”,但一旦设置完成,它就会非常简洁且易于使用。
*****************************************************************
In <>, we'll look at this idea in more
detail as we build a more complex workflow with our new message bus.
+
+在 <> 中,我们将更详细地探讨这个想法,并使用我们的新消息总线构建一个更复杂的工作流。
diff --git a/chapter_09_all_messagebus.asciidoc b/chapter_09_all_messagebus.asciidoc
index 0ef9a65d..3526f217 100644
--- a/chapter_09_all_messagebus.asciidoc
+++ b/chapter_09_all_messagebus.asciidoc
@@ -1,5 +1,6 @@
[[chapter_09_all_messagebus]]
== Going to Town on the Message Bus
+大展身手应用消息总线
((("events and the message bus", "transforming our app into message processor", id="ix_evntMBMP")))
((("message bus", "before, message buse as optional add-on")))
@@ -8,8 +9,11 @@ structure of our application. We'll move from the current state in
<>, where events are an optional
side effect...
+在本章中,我们将使事件成为应用程序内部结构中更为基础的组成部分。我们将从 <> 的当前状态开始,
+在该状态下,事件只是一个可选的副作用...
+
[[maps_chapter_08_before]]
-.Before: the message bus is an optional add-on
+.Before: the message bus is an optional add-on(之前:消息总线是一个可选的附加功能)
image::images/apwp_0901.png[]
((("message bus", "now the main entrypoint to service layer")))
@@ -18,8 +22,11 @@ image::images/apwp_0901.png[]
everything goes via the message bus, and our app has been transformed
fundamentally into a message processor.
+...到 <> 中的情境,
+一切都通过消息总线,我们的应用程序从根本上被转换为一个消息处理器。
+
[[map_chapter_08_after]]
-.The message bus is now the main entrypoint to the service layer
+.The message bus is now the main entrypoint to the service layer(消息总线现在是服务层的主要入口点)
image::images/apwp_0902.png[]
@@ -28,6 +35,9 @@ image::images/apwp_0902.png[]
The code for this chapter is in the
chapter_09_all_messagebus branch https://oreil.ly/oKNkn[on GitHub]:
+本章的代码位于
+chapter_09_all_messagebus 分支 https://oreil.ly/oKNkn[在 GitHub 上]:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -39,6 +49,7 @@ git checkout chapter_08_events_and_message_bus
[role="pagebreak-before less_space"]
=== A New Requirement Leads Us to a New Architecture
+一个新需求引导我们走向新架构
((("situated software")))
((("events and the message bus", "transforming our app into message processor", "new requirement and new architecture")))
@@ -46,16 +57,24 @@ Rich Hickey talks about _situated software,_ meaning software that runs for
extended periods of time, managing a real-world process. Examples include
warehouse-management systems, logistics schedulers, and payroll systems.
+Rich Hickey 谈到了 _情境化软件(situated software)_,指的是长时间运行、管理现实世界流程的软件。
+例如,仓储管理系统、物流调度程序和薪资系统。
+
This software is tricky to write because unexpected things happen all the time
in the real world of physical objects and unreliable humans. For example:
+这种软件很难编写,因为在充满物理对象和不可靠的人工操作的现实世界中,总会发生意想不到的事情。例如:
+
* During a stock-take, we discover that three pass:[SPRINGY-MATTRESS]es have been
water damaged by a leaky roof.
+在盘点时,我们发现有三个 pass:[SPRINGY-MATTRESS] 因屋顶漏水而受损。
* A consignment of pass:[RELIABLE-FORK]s is missing the required documentation and is
held in customs for several weeks. Three pass:[RELIABLE-FORK]s subsequently fail safety
testing and are destroyed.
+一批 pass:[RELIABLE-FORK] 缺少必要的文件,被海关扣留了几周。随后,三件 pass:[RELIABLE-FORK] 未通过安全测试而被销毁。
* A global shortage of sequins means we're unable to manufacture our next batch
of pass:[SPARKLY-BOOKCASE].
+全球亮片短缺导致我们无法生产下一批 pass:[SPARKLY-BOOKCASE]。
((("batches", "batch quantities changed means deallocate and reallocate")))
In these types of situations, we learn about the need to change batch quantities
@@ -68,9 +87,12 @@ model elaboration.]
((("event storming")))
we model the situation as in <>.
+在这些类型的情境中,我们了解到需要在批次已经进入系统后修改其数量。可能是有人在清单上填写的数量有误,或者可能有些沙发从卡车上掉了下来。通过与业务部门的交流,脚注:[
+事件驱动建模非常流行,因此一种称为 _事件风暴(event storming)_ 的实践已经被开发出来,用于促进基于事件的需求收集和领域模型的细化。]
+我们将这一情境建模为 <> 中所示的样子。
[[batch_changed_events_flow_diagram]]
-.Batch quantity changed means deallocate and reallocate
+.Batch quantity changed means deallocate and reallocate(批次数量的变更意味着需要取消分配并重新分配)
image::images/apwp_0903.png[]
[role="image-source"]
----
@@ -90,6 +112,10 @@ quantity drops to less than the total already allocated, we need to
_deallocate_ those orders from that batch. Then each one will require
a new allocation, which we can capture as an event called `AllocationRequired`.
+一个我们称为 `BatchQuantityChanged`(批次数量变更)的事件,应该让我们修改批次的数量,是的,但同时还要应用一条 _业务规则(business rule)_:
+如果新数量小于已分配的总量,我们就需要从该批次中 _取消分配(deallocate)_ 这些订单。然后,每个订单都将需要重新分配,
+我们可以将这一需求记录为一个名为 `AllocationRequired` 的事件。
+
Perhaps you're already anticipating that our internal message bus and events can
help implement this requirement. We could define a service called
`change_batch_quantity` that knows how to adjust batch quantities and also how
@@ -99,17 +125,27 @@ service, in separate transactions. Once again, our message bus helps us to
enforce the single responsibility principle, and it allows us to make choices about
transactions and data integrity.
+或许你已经预想到,我们的内部消息总线和事件可以帮助实现这一需求。我们可以定义一个名为 `change_batch_quantity` 的服务,
+该服务既知道如何调整批次数量,也知道如何 _取消分配_ 多余的订单项。然后,每次取消分配都可以触发一个 `AllocationRequired` 事件,
+该事件可以在单独的事务中转发到现有的 `allocate` 服务中。再一次地,我们的消息总线帮助我们遵循了单一职责原则,
+并让我们能够对事务和数据完整性做出选择。
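+
+A sketch of the shape that service could take (a preview; the real handler,
+and helpers like `get_by_batchref`, are built over the course of this
+chapter):
+
+[[change_batch_quantity_sketch]]
+.The change_batch_quantity flow (illustrative sketch)
+====
+[source,python]
+[role="skip"]
+----
+def change_batch_quantity(
+    event: events.BatchQuantityChanged,
+    uow: unit_of_work.AbstractUnitOfWork,
+):
+    with uow:
+        product = uow.products.get_by_batchref(batchref=event.ref)
+        # the model deallocates any excess order lines and records an
+        # AllocationRequired event for each one
+        product.change_batch_quantity(ref=event.ref, qty=event.qty)
+        uow.commit()
+----
+====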
+
==== Imagining an Architecture Change: Everything Will Be an [.keep-together]#Event Handler#
+设想架构变更:一切都将成为事件处理器
((("event handlers", "imagined architecture in which everything is an event handler")))
((("events and the message bus", "transforming our app into message processor", "imagined architecture, everything will be an event handler")))
But before we jump in, think about where we're headed. There are two
kinds of flows through our system:
+但在我们开始之前,先思考一下我们的目标。我们的系统中有两种流程:
+
* API calls that are handled by a service-layer function
+由服务层函数处理的 API 调用
* Internal events (which might be raised as a side effect of a service-layer function)
and their handlers (which in turn call service-layer functions)
+内部事件(可能是服务层函数的副作用引发的)及其处理器(它们反过来调用服务层函数)。
((("service functions", "making them event handlers")))
Wouldn't it be easier if everything was an event handler? If we rethink our API
@@ -117,8 +153,12 @@ calls as capturing events, the service-layer functions can be event handlers
too, and we no longer need to make a distinction between internal and external
event handlers:
+如果一切都是事件处理器,那岂不是更简单?如果我们将 API 调用重新构想为捕获事件,那么服务层函数也可以是事件处理器,
+我们就不再需要区分内部和外部事件处理器了:
+
* `services.allocate()` could be the handler for an
`AllocationRequired` event and could emit `Allocated` events as its output.
+`services.allocate()` 可以作为 `AllocationRequired` 事件的处理器,并将 `Allocated` 事件作为其输出。
* `services.add_batch()` could be the handler for a `BatchCreated`
event.footnote:[If you've done a bit of reading about event-driven
@@ -127,18 +167,26 @@ event handlers:
In the <>, we'll introduce the distinction
between commands and events.]
((("BatchCreated event", "services.add_batch as handler for")))
+`services.add_batch()` 可以作为 `BatchCreated` 事件的处理器。脚注:[如果你对事件驱动架构有一些了解,你可能会觉得,
+“这里的一些事件听起来更像是命令!” 请耐心些!我们正在尝试一次引入一个概念。在 <> 中,
+我们将介绍命令与事件之间的区别。]
Our new requirement will fit the same pattern:
+我们的新需求也将符合相同的模式:
+
* An event called `BatchQuantityChanged` can invoke a handler called
`change_batch_quantity()`.
((("BatchQuantityChanged event", "invoking handler change_batch_quantity")))
+一个名为 `BatchQuantityChanged` 的事件可以调用一个名为 `change_batch_quantity()` 的处理器。
* And the new `AllocationRequired` events that it may raise can be passed on to
`services.allocate()` too, so there is no conceptual difference between a
brand-new allocation coming from the API and a reallocation that's
internally triggered by a deallocation.
((("AllocationRequired event", "passing to services.allocate")))
+而它可能引发的新 `AllocationRequired` 事件也可以传递给 `services.allocate()`,这样从概念上来说,
+来自 API 的全新分配和因取消分配而内部触发的重新分配之间就没有区别了。
((("preparatory refactoring workflow")))
@@ -146,26 +194,36 @@ All sound like a bit much? Let's work toward it all gradually. We'll
follow the https://oreil.ly/W3RZM[Preparatory Refactoring] workflow, aka "Make
the change easy; then make the easy change":
+听起来有点多?让我们逐步实现这一切。我们将遵循 https://oreil.ly/W3RZM[预备性重构] 的工作流程,也称为“让变更变得简单;然后进行简单的变更”:
+
1. We refactor our service layer into event handlers. We can
get used to the idea of events being the way we describe inputs to the
system. In particular, the existing `services.allocate()` function will
become the handler for an event called `AllocationRequired`.
+我们将服务层重构为事件处理器。我们可以逐渐适应使用事件来描述系统输入的方式。特别是,
+现有的 `services.allocate()` 函数将变成名为 `AllocationRequired` 的事件的处理器。
2. We build an end-to-end test that puts `BatchQuantityChanged` events
into the system and looks for `Allocated` events coming out.
+我们编写一个端到端测试,将 `BatchQuantityChanged` 事件输入系统,并检查输出的 `Allocated` 事件。
3. Our implementation will conceptually be very simple: a new
handler for `BatchQuantityChanged` events, whose implementation will emit
`AllocationRequired` events, which in turn will be handled by the exact same
handler for allocations that the API uses.
+我们的实现从概念上讲将非常简单:一个用于处理 `BatchQuantityChanged` 事件的新处理器,
+其实现将触发 `AllocationRequired` 事件,而这些事件将由与 API 使用的完全相同的分配处理器来处理。
Along the way, we'll make a small tweak to the message bus and UoW, moving the
responsibility for putting new events on the message bus into the message bus itself.
+在此过程中,我们将对消息总线和工作单元进行一个小调整:把“将新事件放入消息总线”的职责转移给消息总线本身。
+
=== Refactoring Service Functions to Message Handlers
+将服务函数重构为消息处理器
((("events and the message bus", "transforming our app into message processor", "refactoring service functions to message handlers")))
((("service functions", "refactoring to message handlers")))
@@ -174,8 +232,10 @@ responsibility for putting new events on the message bus into the message bus it
We start by defining the two events that capture our current API
inputs—++AllocationRequired++ and `BatchCreated`:
+我们首先定义两个捕获当前 API 输入的事件——++AllocationRequired++ 和 `BatchCreated`:
+
[[two_new_events]]
-.BatchCreated and AllocationRequired events (src/allocation/domain/events.py)
+.BatchCreated and AllocationRequired events (src/allocation/domain/events.py)(BatchCreated 和 AllocationRequired 事件)
====
[source,python]
----
@@ -200,9 +260,13 @@ Then we rename _services.py_ to _handlers.py_; we add the existing message handl
for `send_out_of_stock_notification`; and most importantly, we change all the
handlers so that they have the same inputs, an event and a UoW:
+接着我们将 _services.py_ 重命名为 _handlers.py_;
+添加现有的 `send_out_of_stock_notification` 消息处理器;
+最重要的是,修改所有的处理器使它们具有相同的输入——一个事件和一个工作单元:
+
[[services_to_handlers]]
-.Handlers and services are the same thing (src/allocation/service_layer/handlers.py)
+.Handlers and services are the same thing (src/allocation/service_layer/handlers.py)(处理器和服务是同一回事)
====
[source,python]
----
@@ -237,8 +301,10 @@ def send_out_of_stock_notification(
The change might be clearer as a diff:
+这个更改通过差异(diff)可能会更清晰:
+
[[services_to_handlers_diff]]
-.Changing from services to handlers (src/allocation/service_layer/handlers.py)
+.Changing from services to handlers (src/allocation/service_layer/handlers.py)(从服务转换为处理器)
====
[source,diff]
----
@@ -275,8 +341,10 @@ The change might be clearer as a diff:
Along the way, we've made our service-layer's API more structured and more consistent. It was a scattering of
primitives, and now it uses well-defined objects (see the following sidebar).
+在此过程中,我们使服务层的 API 更加结构化、更加一致。原本它是一堆散乱的原始类型参数,现在则使用了定义良好的对象(请参见以下侧栏)。
+
[role="nobreakinside less_space"]
-.From Domain Objects, via Primitive Obsession, to [.keep-together]#Events as an Interface#
+.From Domain Objects, via Primitive Obsession, to [.keep-together]#Events as an Interface#(从领域对象,经由基础类型强迫症,到以事件为接口)
*******************************************************************************
((("service layer", "from domain objects to primitives to events as interface")))
@@ -286,31 +354,48 @@ Some of you may remember <>, in which we changed our servic
from being in terms of domain objects to primitives. And now we're moving
back, but to different objects? What gives?
+你们中的一些人可能还记得 <>,当时我们将服务层 API 从基于领域对象改为使用原始类型。
+而现在我们又改回去了,但这次使用的是不同的对象?这意味着什么?
+
In OO circles, people talk about _primitive obsession_ as an antipattern: avoid
primitives in public APIs, and instead wrap them with custom value classes, they
would say. In the Python world, a lot of people would be quite skeptical of
that as a rule of thumb. When mindlessly applied, it's certainly a recipe for
unnecessary complexity. So that's not what we're doing per se.
+在面向对象(OO)圈子里,人们将 _primitive obsession_(基础类型强迫症)视为一种反模式:他们会建议在公共 API 中避免使用原始类型,
+而是用自定义的值类将其封装。在 _Python_ 世界中,许多人对这条经验法则持相当怀疑的态度。不加思考地应用它,无疑会导致不必要的复杂性。
+所以,我们并不是在照搬这种做法。
+
The move from domain objects to primitives bought us a nice bit of decoupling:
our client code was no longer coupled directly to the domain, so the service
layer could present an API that stays the same even if we decide to make changes
to our model, and vice versa.
+从领域对象转向原始类型为我们带来了一点不错的解耦效果:我们的客户端代码不再直接与领域耦合,
+因此服务层可以提供一个即使我们决定更改模型也能保持不变的 API,反之亦然。
+
So have we gone backward? Well, our core domain model objects are still free to
vary, but instead we've coupled the external world to our event classes.
They're part of the domain too, but the hope is that they vary less often, so
they're a sensible artifact to couple on.
+那么我们是不是倒退了?其实不然:我们的核心领域模型对象依然可以自由变化,但我们将外部世界与事件类耦合在了一起。
+事件类也属于领域的一部分,但希望它们的变化频率较低,因此将它们用作耦合的目标是合理的选择。
+
And what have we bought ourselves? Now, when invoking a use case in our application,
we no longer need to remember a particular combination of primitives, but just a single
event class that represents the input to our application. That's conceptually
quite nice. On top of that, as you'll see in <>, those
event classes can be a nice place to do some input validation.
+
+那么我们得到了什么好处呢?现在,当在我们的应用中调用一个用例时,我们不再需要记住一组特定的原始类型组合,而只需处理一个代表应用输入的事件类。
+从概念上讲,这相当不错。除此之外,正如你将在 <> 中看到的,这些事件类也是进行输入验证的好地方。
*******************************************************************************
==== The Message Bus Now Collects Events from the UoW
+消息总线现在从工作单元中收集事件
((("message bus", "now collecting events from UoW")))
((("Unit of Work pattern", "message bus now collecting events from UoW")))
@@ -322,9 +407,13 @@ between the UoW and message bus until now, so this will make it one-way. Instea
of having the UoW _push_ events onto the message bus, we will have the message
bus _pull_ events from the UoW.
+我们的事件处理器现在需要一个工作单元。此外,随着消息总线在我们的应用中变得越来越核心,让它明确地负责收集和处理新事件也是合理的。
+到目前为止,工作单元和消息总线之间存在一定的循环依赖,这次修改将把依赖变为单向:与其让工作单元把事件 _推送_ 到消息总线,
+我们将改为让消息总线从工作单元中 _拉取_ 事件。
+
[[handle_has_uow_and_queue]]
-.Handle takes a UoW and manages a queue (src/allocation/service_layer/messagebus.py)
+.Handle takes a UoW and manages a queue (src/allocation/service_layer/messagebus.py)(Handle 接受一个工作单元并管理一个队列)
====
[source,python]
[role="non-head"]
@@ -343,19 +432,26 @@ def handle(
====
<1> The message bus now gets passed the UoW each time it starts up.
+现在,每次消息总线启动时,都会将工作单元传递给它。
<2> When we begin handling our first event, we start a queue.
+当我们开始处理第一个事件时,我们会启动一个队列。
<3> We pop events from the front of the queue and invoke their handlers (the
[.keep-together]#`HANDLERS`# dict hasn't changed; it still maps event types to handler functions).
+我们从队列的前端弹出事件并调用其处理器([.keep-together]#`HANDLERS`# 字典没有变化,它仍然将事件类型映射到处理器函数)。
<4> The message bus passes the UoW down to each handler.
+消息总线将工作单元传递给每个处理器。
<5> After each handler finishes, we collect any new events that have been
generated and add them to the queue.
+每个处理器处理完成后,我们会收集所有已生成的新事件,并将它们添加到队列中。
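
A minimal sketch of such a pull-based `handle()` loop, consistent with the callouts above (an illustration rather than the verbatim listing; it assumes the `HANDLERS` dict and a UoW exposing `collect_new_events()`):

[source,python]
----
# Sketch only: HANDLERS maps each event type to a list of handler functions.
def handle(event, uow):                      # the bus receives the UoW each time (1)
    queue = [event]                          # start a queue with the first event (2)
    while queue:
        event = queue.pop(0)                 # pop events from the front (3)
        for handler in HANDLERS[type(event)]:
            handler(event, uow=uow)          # pass the UoW down to each handler (4)
            queue.extend(uow.collect_new_events())  # gather newly raised events (5)
----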
In _unit_of_work.py_, `publish_events()` becomes a less active method,
`collect_new_events()`:
+在 _unit_of_work.py_ 中,`publish_events()` 变成了一个不那么主动的方法 `collect_new_events()`:
+
[[uow_collect_new_events]]
-.UoW no longer puts events directly on the bus (src/allocation/service_layer/unit_of_work.py)
+.UoW no longer puts events directly on the bus (src/allocation/service_layer/unit_of_work.py)(工作单元不再直接将事件放到消息总线上)
====
[source,diff]
----
@@ -381,10 +477,13 @@ In _unit_of_work.py_, `publish_events()` becomes a less active method,
====
<1> The `unit_of_work` module now no longer depends on `messagebus`.
+现在,`unit_of_work` 模块不再依赖于 `messagebus`。
<2> We no longer `publish_events` automatically on commit. The message bus
is keeping track of the event queue instead.
+我们不再在提交时自动调用 `publish_events`。消息总线现在负责跟踪事件队列。
<3> And the UoW no longer actively puts events on the message bus; it
just makes them available.
+工作单元不再主动将事件放入消息总线;它只是提供了这些事件。
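
A plausible shape for that more passive method, assuming the repository keeps a `seen` set of aggregates that each carry an `events` list:

[source,python]
----
class AbstractUnitOfWork(abc.ABC):
    ...

    def collect_new_events(self):
        # Drain events from every aggregate this UoW has seen; the message
        # bus calls this, instead of the UoW pushing events onto the bus.
        for product in self.products.seen:
            while product.events:
                yield product.events.pop(0)
----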
//IDEA: we can definitely get rid of _commit() now right?
// (EJ2) at this point _commit() doesn't serve any purpose, so it could be deleted.
@@ -392,15 +491,18 @@ In _unit_of_work.py_, `publish_events()` becomes a less active method,
[role="pagebreak-before less_space"]
==== Our Tests Are All Written in Terms of Events Too
+我们的测试现在也都是基于事件编写的
((("events and the message bus", "transforming our app into message processor", "tests writtern to in terms of events")))
((("testing", "tests written in terms of events")))
Our tests now operate by creating events and putting them on the
message bus, rather than invoking service-layer functions directly:
+我们的测试现在通过创建事件并将其放入消息总线来运行,而不是直接调用服务层函数:
+
[[handler_tests]]
-.Handler tests use events (tests/unit/test_handlers.py)
+.Handler tests use events (tests/unit/test_handlers.py)(用事件来测试处理器)
====
[source,diff]
----
@@ -434,6 +536,7 @@ class TestAddBatch:
[[temporary_ugly_hack]]
==== A Temporary Ugly Hack: The Message Bus Has to Return Results
+一个临时的丑陋解决方案:消息总线必须返回结果
((("events and the message bus", "transforming our app into message processor", "temporary hack, message bus returning results")))
((("message bus", "returning results in temporary hack")))
@@ -441,8 +544,11 @@ Our API and our service layer currently want to know the allocated batch referen
when they invoke our `allocate()` handler. This means we need to put in
a temporary hack on our message bus to let it return events:
+我们目前的 API 和服务层在调用 `allocate()` 处理器时需要知道已分配批次的引用。
+这意味着我们需要在消息总线上加入一个临时的解决方案,以使其能够返回事件:
+
[[hack_messagebus_results]]
-.Message bus returns results (src/allocation/service_layer/messagebus.py)
+.Message bus returns results (src/allocation/service_layer/messagebus.py)(消息总线返回结果)
====
[source,diff]
----
@@ -470,11 +576,14 @@ a temporary hack on our message bus to let it return events:
It's because we're mixing the read and write responsibilities in our system.
We'll come back to fix this wart in <>.
+这是因为我们在系统中混合了读取和写入职责。我们会在 <> 中回过头来修复这个缺陷。
+
==== Modifying Our API to Work with Events
+修改我们的 API 以支持事件
[[flask_uses_messagebus]]
-.Flask changing to message bus as a diff (src/allocation/entrypoints/flask_app.py)
+.Flask changing to message bus as a diff (src/allocation/entrypoints/flask_app.py)(以差异形式展示 Flask 改用消息总线)
====
[source,diff]
----
@@ -497,29 +606,40 @@ We'll come back to fix this wart in <>.
<1> Instead of calling the service layer with a bunch of primitives extracted
from the request JSON...
+我们不再通过从请求 JSON 中提取的一堆原始数据来调用服务层...
<2> We instantiate an event.
+我们实例化一个事件。
<3> Then we pass it to the message bus.
+然后我们将其传递给消息总线。
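
Put together, the endpoint might look something like this (the `InvalidSku` exception and helper names are assumptions carried over from earlier chapters):

[source,python]
----
@app.route("/allocate", methods=["POST"])
def allocate_endpoint():
    try:
        # instead of primitives, we build an event from the request JSON...
        event = events.AllocationRequired(
            request.json["orderid"], request.json["sku"], request.json["qty"]
        )
        # ...and hand it to the message bus
        results = messagebus.handle(event, unit_of_work.SqlAlchemyUnitOfWork())
        batchref = results.pop(0)
    except InvalidSku as e:
        return {"message": str(e)}, 400
    return {"batchref": batchref}, 201
----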
And we should be back to a fully functional application, but one that's now
fully event-driven:
+这样,我们的应用程序就恢复了完整的功能,而且现在它是完全事件驱动的:
+
* What used to be service-layer functions are now event handlers.
+以前是服务层函数的部分现在变成了事件处理器。
* That makes them the same as the functions we invoke for handling internal events raised by
our domain model.
+这使它们与我们用来处理领域模型所引发的内部事件的函数完全相同。
* We use events as our data structure for capturing inputs to the system,
as well as for handing off of internal work packages.
+我们使用事件作为数据结构来捕获系统的输入,同时也用于传递内部工作包。
* The entire app is now best described as a message processor, or an event processor
if you prefer. We'll talk about the distinction in the
<>.
+整个应用程序现在最好被描述为一个消息处理器,或者如果你愿意的话,可以称为事件处理器。
+我们将在 <> 中讨论两者的区别。
=== Implementing Our New Requirement
+实现我们的新需求
((("reallocation", "sequence diagram for flow")))
((("events and the message bus", "transforming our app into message processor", "implementing the new requirement", id="ix_evntMBMPreq")))
@@ -529,9 +649,13 @@ inputs some new `BatchQuantityChanged` events and pass them to a handler, which
turn might emit some `AllocationRequired` events, and those in turn will go
back to our existing handler for reallocation.
+我们的重构阶段已经完成了。让我们看看是否真的“让变更变得简单”。
+现在来实现我们的新需求,如 <> 中所示:我们将接收一些新的 `BatchQuantityChanged` 事件作为输入,
+并将它们传递给处理器,而该处理器可能会触发一些 `AllocationRequired` 事件,而这些事件又将传递给我们现有的重新分配处理器。
+
[role="width-75"]
[[reallocation_sequence_diagram]]
-.Sequence diagram for reallocation flow
+.Sequence diagram for reallocation flow(重新分配流程的序列图)
image::images/apwp_0904.png[]
[role="image-source"]
----
@@ -562,18 +686,24 @@ WARNING: When you split things out like this across two units of work,
See <> for more discussion.
((("data integrity", "issues arising from splitting operation across two UoWs")))
((("Unit of Work pattern", "splitting operations across two UoWs")))
+当你像这样把逻辑拆分到两个工作单元中时,你实际上会产生两个数据库事务,从而带来数据完整性问题:可能出现第一个事务完成而第二个事务失败的情况。
+你需要考虑这是否可以接受,以及是否需要检测这种情况的发生并采取相应的措施。
+更多讨论详见 <>。
==== Our New Event
+我们的新事件
((("BatchQuantityChanged event", "implementing")))
The event that tells us a batch quantity has changed is simple; it just
needs a batch reference and a new quantity:
+告知我们批次数量已更改的事件很简单;它只需要一个批次引用和一个新的数量:
+
[[batch_quantity_changed_event]]
-.New event (src/allocation/domain/events.py)
+.New event (src/allocation/domain/events.py)(新事件)
====
[source,python]
----
@@ -586,6 +716,7 @@ class BatchQuantityChanged(Event):
[[test-driving-ch9]]
=== Test-Driving a New Handler
+测试驱动一个新的处理器
((("testing", "tests written in terms of events", "handler tests for change_batch_quantity")))
((("events and the message bus", "transforming our app into message processor", "test driving new handler")))
@@ -596,9 +727,12 @@ we can operate in "high gear" and write our unit tests at the highest
possible level of abstraction, in terms of events. Here's what they might
look like:
+根据在 <> 中学到的经验,我们可以挂上“高速挡”,
+在尽可能高的抽象层级上编写单元测试,也就是基于事件来编写。以下是它们可能的样子:
+
[[test_change_batch_quantity_handler]]
-.Handler tests for change_batch_quantity (tests/unit/test_handlers.py)
+.Handler tests for change_batch_quantity (tests/unit/test_handlers.py)(针对 change_batch_quantity 的处理器测试)
====
[source,python]
----
@@ -640,20 +774,25 @@ class TestChangeBatchQuantity:
<1> The simple case would be trivially easy to implement; we just
modify a quantity.
+简单情况的实现非常容易;我们只需修改一个数量即可。
<2> But if we try to change the quantity to less than
has been allocated, we'll need to deallocate at least one order,
and we expect to reallocate it to a new batch.
+但如果我们尝试将数量更改为小于已分配的值,我们就需要至少取消分配一个订单,并且我们期望将其重新分配到一个新批次。
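
The first of those tests might look roughly like this, assuming the `FakeUnitOfWork` from earlier chapters and a `BatchQuantityChanged(ref, qty)` event:

[source,python]
----
class TestChangeBatchQuantity:
    def test_changes_available_quantity(self):
        uow = FakeUnitOfWork()
        messagebus.handle(
            events.BatchCreated("batch1", "ADORABLE-SETTEE", 100, None), uow
        )
        [batch] = uow.products.get(sku="ADORABLE-SETTEE").batches
        assert batch.available_quantity == 100

        messagebus.handle(events.BatchQuantityChanged("batch1", 50), uow)
        assert batch.available_quantity == 50
----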
==== Implementation
+实现
((("change_batch_quantity", "implementation, handler delegating to model layer")))
Our new handler is very simple:
+我们的新处理器非常简单:
+
[[change_quantity_handler]]
-.Handler delegates to model layer (src/allocation/service_layer/handlers.py)
+.Handler delegates to model layer (src/allocation/service_layer/handlers.py)(处理器委托给模型层)
====
[source,python]
----
@@ -674,8 +813,10 @@ def change_batch_quantity(
((("repositories", "new query type on our repository")))
We realize we'll need a new query type on our repository:
+我们发现需要在仓储中添加一种新的查询类型:
+
[[get_by_batchref]]
-.A new query type on our repository (src/allocation/adapters/repository.py)
+.A new query type on our repository (src/allocation/adapters/repository.py)(我们仓储上的一种新查询类型)
====
[source,python,highlight="7,22,32"]
----
@@ -724,8 +865,10 @@ class SqlAlchemyRepository(AbstractRepository):
((("faking", "FakeRepository", "new query type on")))
And on our `FakeRepository` too:
+在我们的 `FakeRepository` 中也需要添加:
+
[[fakerepo_get_by_batchref]]
-.Updating the fake repo too (tests/unit/test_handlers.py)
+.Updating the fake repo too (tests/unit/test_handlers.py)(也更新了伪造仓储)
====
[source,python]
[role="non-head"]
@@ -754,9 +897,13 @@ NOTE: We're adding a query to our repository to make this use case easier to
and the <> have some tips
on managing complex queries.
((("aggregates", "query on repository returning single aggregate")))
+我们在仓储中添加一个查询,以便更轻松地实现这一用例。只要查询返回的是单个聚合,就没有违反任何规则。如果你发现自己在仓储上编写复杂的查询,
+就可能需要考虑换一种设计。诸如 `get_most_popular_products` 或 `find_products_by_order_id` 之类的方法,尤其会让我们提高警惕。
+<> 和 <> 中有一些关于管理复杂查询的建议。
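
Sketches of the two `_get_by_batchref` implementations above, with the exact query details depending on your ORM mappings:

[source,python]
----
class SqlAlchemyRepository(AbstractRepository):
    ...

    def _get_by_batchref(self, batchref):
        return (
            self.session.query(model.Product)
            .join(model.Batch)
            .filter(orm.batches.c.reference == batchref)
            .one_or_none()
        )


class FakeRepository(AbstractRepository):
    ...

    def _get_by_batchref(self, batchref):
        return next(
            (p for p in self._products for b in p.batches if b.reference == batchref),
            None,
        )
----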
==== A New Method on the Domain Model
+领域模型中的一个新方法
((("domain model", "new method on, change_batch_quantity")))
We add the new method to the model,
@@ -764,9 +911,14 @@ which does the quantity change and deallocation(s) inline
and publishes a new event.
We also modify the existing allocate function to publish an event:
+我们在模型中添加了一个新方法,
+该方法直接执行数量更改和取消分配操作,
+并发布一个新事件。
+我们还修改了现有的分配函数,使其发布一个事件:
+
[[change_batch_model_layer]]
-.Our model evolves to capture the new requirement (src/allocation/domain/model.py)
+.Our model evolves to capture the new requirement (src/allocation/domain/model.py)(我们的模型演化以满足新需求)
====
[source,python]
----
@@ -794,9 +946,11 @@ class Batch:
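
The core of the new model method might look like this (a sketch; it assumes `deallocate_one()` pops the most recently allocated line, and that a negative `available_quantity` means we are over-allocated):

[source,python]
----
class Product:
    ...

    def change_batch_quantity(self, ref: str, qty: int):
        batch = next(b for b in self.batches if b.reference == ref)
        batch._purchased_quantity = qty
        while batch.available_quantity < 0:
            line = batch.deallocate_one()
            self.events.append(
                events.AllocationRequired(line.orderid, line.sku, line.qty)
            )
----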
((("message bus", "wiring up new event handlers to")))
We wire up our new handler:
+我们将新的处理器连接起来:
+
[[full_messagebus]]
-.The message bus grows (src/allocation/service_layer/messagebus.py)
+.The message bus grows (src/allocation/service_layer/messagebus.py)(消息总线逐渐扩展)
====
[source,python]
----
@@ -811,8 +965,11 @@ HANDLERS = {
And our new requirement is fully implemented.
+至此,我们的新需求就完全实现了。
+
[[fake_message_bus]]
=== Optionally: Unit Testing Event Handlers in Isolation with a Fake Message Bus
+可选:使用假的消息总线对事件处理器进行独立的单元测试
((("message bus", "unit testing event handlers with fake message bus")))
((("testing", "tests written in terms of events", "unit testing event handlers with fake message bus")))
@@ -824,18 +981,27 @@ event handler triggers deallocation, and emits new `AllocationRequired` events,
turn are handled by their own handlers. One test covers a chain of multiple
events and handlers.
+重新分配工作流的主要测试是 _边到边_(edge-to-edge)的(请参见 <> 中的示例代码)。它使用真正的消息总线,并测试整个流程:
+`BatchQuantityChanged` 事件处理器触发取消分配,并发出新的 `AllocationRequired` 事件,这些事件又由其各自的处理器处理。
+一个测试覆盖了一连串的多个事件和处理器。
+
Depending on the complexity of your chain of events, you may decide that you
want to test some handlers in isolation from one another. You can do this
using a "fake" message bus.
+根据你的事件链的复杂性,你可能会决定对一些处理器进行彼此隔离的测试。你可以通过使用一个“假的”消息总线来实现这一点。
+
((("Unit of Work pattern", "fake message bus implemented in UoW")))
In our case, we actually intervene by modifying the `publish_events()` method
on `FakeUnitOfWork` and decoupling it from the real message bus, instead making
it record what events it sees:
+在我们的案例中,我们实际上是通过修改 `FakeUnitOfWork` 上的 `publish_events()` 方法进行干预,
+将其与真实消息总线解耦,而是让它记录所接收到的事件:
+
[[fake_messagebus]]
-.Fake message bus implemented in UoW (tests/unit/test_handlers.py)
+.Fake message bus implemented in UoW (tests/unit/test_handlers.py)(在工作单元中实现的伪造消息总线)
====
[source,python]
[role="non-head"]
@@ -858,9 +1024,13 @@ test: instead of checking all the side effects, we just check that
`BatchQuantityChanged` leads to `AllocationRequired` if the quantity drops
below the total already allocated:
+现在,当我们使用 `FakeUnitOfWorkWithFakeMessageBus` 调用 `messagebus.handle()` 时,它只会运行该事件的处理器。
+因此,我们可以编写一个更独立的单元测试:不用检查所有的副作用,我们只需验证当数量减少到小于已分配总量时,
+`BatchQuantityChanged` 是否会引发 `AllocationRequired`:
+
[role="nobreakinside less_space"]
[[test_handler_in_isolation]]
-.Testing reallocation in isolation (tests/unit/test_handlers.py)
+.Testing reallocation in isolation (tests/unit/test_handlers.py)(独立测试重新分配)
====
[source,python]
[role="non-head"]
@@ -895,8 +1065,10 @@ Whether you want to do this or not depends on the complexity of your chain of
events. We say, start out with edge-to-edge testing, and resort to
this only if necessary.
+是否需要这样做取决于你的事件链的复杂性。我们的建议是:从边到边(edge-to-edge)测试开始,只有在必要时才使用这种方法。
+
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
*******************************************************************************
((("message bus", "abstract message bus and its real and fake versions")))
@@ -905,14 +1077,20 @@ In the discussion of testing handlers in isolation, we used something called
`FakeUnitOfWorkWithFakeMessageBus`, which is unnecessarily complicated and
violates the SRP.
+强迫自己真正理解一段代码的好方法就是重构它。
+在讨论隔离测试处理器时,我们用到了一个叫 `FakeUnitOfWorkWithFakeMessageBus` 的东西,它过于复杂,而且违反了单一职责原则(SRP)。
+
((("Singleton pattern, messagebus.py implementing")))
If we change the message bus to being a class,footnote:[The "simple"
implementation in this chapter essentially uses the _messagebus.py_ module
itself to implement the Singleton Pattern.]
then building a `FakeMessageBus` is more straightforward:
+如果我们将消息总线改为一个类footnote:[本章中的“简单”实现实质上是使用 _messagebus.py_ 模块本身来实现单例模式。],
+那么构建一个 `FakeMessageBus` 将更加直接:
+
[[abc_for_fake_messagebus]]
-.An abstract message bus and its real and fake versions
+.An abstract message bus and its real and fake versions(一个抽象的消息总线及其真实和假的版本)
====
[source,python]
[role="skip"]
@@ -946,29 +1124,45 @@ https://github.com/cosmicpython/code/tree/chapter_09_all_messagebus[GitHub] and
working, and then write a version of `test_reallocates_if_necessary_isolated()`
from earlier.
+所以,请前往 https://github.com/cosmicpython/code/tree/chapter_09_all_messagebus[GitHub] 查看代码,
+看看能否让基于类的版本运行起来,然后参照前面的示例编写一个你自己的 `test_reallocates_if_necessary_isolated()`。
+
We use a class-based message bus in <>,
if you need more inspiration.
+
+如果你需要更多灵感,我们在 <> 中使用了一个基于类的消息总线。
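
One possible shape for the class-based bus and its fake (a sketch, not the only way to do it):

[source,python]
----
class AbstractMessageBus:
    HANDLERS: Dict[Type[events.Event], List[Callable]]

    def handle(self, event: events.Event):
        for handler in self.HANDLERS[type(event)]:
            handler(event)


class MessageBus(AbstractMessageBus):
    HANDLERS = {
        events.OutOfStock: [send_out_of_stock_notification],
        # ...one entry per event type
    }


class FakeMessageBus(AbstractMessageBus):
    def __init__(self):
        self.events_published = []  # type: List[events.Event]
        self.HANDLERS = {
            events.OutOfStock: [lambda e: self.events_published.append(e)],
        }
----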
*******************************************************************************
=== Wrap-Up
+总结
Let's look back at what we've achieved, and think about why we did it.
+让我们回顾一下我们所取得的成果,并思考这样做的原因。
+
==== What Have We Achieved?
+我们取得了什么成就?
Events are simple dataclasses that define the data structures for inputs
and internal messages within our system. This is quite powerful from a DDD
standpoint, since events often translate really well into business language
(look up __event storming__ if you haven't already).
+事件是简单的数据类,它们定义了系统的输入和内部消息的数据结构。从 DDD(领域驱动设计)的角度来看,这相当强大,
+因为事件通常能够很好地转化为业务语言(如果你还没了解过 __事件风暴__,可以研究一下)。
+
Handlers are the way we react to events. They can call down to our
model or call out to external services. We can define multiple
handlers for a single event if we want to. Handlers can also raise other
events. This allows us to be very granular about what a handler does
and really stick to the SRP.
+处理器(Handlers)是我们对事件作出反应的方式。它们既可以调用我们的模型,也可以调用外部服务。如果需要,我们可以为单个事件定义多个处理器。
+处理器也可以触发其他事件。这使我们能够非常细化地定义处理器的职责,并真正坚持单一职责原则(SRP)。
+
==== Why Have We Achieved It?
+我们为什么要实现这些?
((("events and the message bus", "transforming our app into message processor", "whole app as message bus, trade-offs")))
((("message bus", "whole app as, trade-offs")))
@@ -979,6 +1173,10 @@ complexity (see <>), but we buy ourselves a
pattern that can handle almost arbitrarily complex requirements without needing
any further conceptual or architectural change to the way we do things.
+我们持续使用这些架构模式的目标是让应用程序的复杂性增长速度慢于其规模增长。当我们完全采用消息总线时,正如以往一样,
+我们在架构复杂性上需要付出一定的代价(详见 <>),但我们也换来了一个能够处理几乎任意复杂需求的模式,
+而无需对我们的工作方式进行任何进一步的概念性或架构性变更。
+
Here we've added quite a complicated use case (change quantity, deallocate,
start new transaction, reallocate, publish external notification), but
architecturally, there's been no cost in terms of complexity. We've added new
@@ -988,23 +1186,34 @@ how to reason about, and that are easy to explain to newcomers. Our moving
parts each have one job, they're connected to each other in well-defined ways,
and there are no unexpected side effects.
+在这里,我们添加了一个相当复杂的用例(更改数量、取消分配、启动新事务、重新分配、发布外部通知),但从架构上看,这并未增加复杂性。
+我们添加了新的事件、新的处理器以及一个新的外部适配器(用于电子邮件),这一切都属于我们的架构中已经存在的 _事物_ 类别,
+我们了解这些并知道如何进行推理,而且这些内容也很容易向新人解释。我们的各个模块各司其职,以定义明确的方式相互连接,没有意外的副作用。
+
[[chapter_09_all_messagebus_tradeoffs]]
[options="header"]
-.Whole app is a message bus: the trade-offs
+.Whole app is a message bus: the trade-offs(整个应用程序都基于消息总线:权衡取舍)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* Handlers and services are the same thing, so that's simpler.
+处理器和服务是同一回事,所以这更简单。
* We have a nice data structure for inputs to the system.
+我们为系统的输入设计了一个不错的数据结构。
a|
* A message bus is still a slightly unpredictable way of doing things from
a web point of view. You don't know in advance when things are going to end.
+从 Web 视角来看,消息总线仍然是一种稍微不可预测的处理方式。你无法提前知道事情何时会结束。
* There will be duplication of fields and structure between model objects and events, which will have a maintenance cost. Adding a field to one usually means adding a field to at least
one of the others.
+模型对象和事件之间的字段和结构会有重复,这将带来维护成本。向其中一个添加字段通常意味着至少需要向其他一个也添加字段。
|===
((("events and the message bus", "transforming our app into message processor", startref="ix_evntMBMP")))
Now, you may be wondering, where are those `BatchQuantityChanged` events
going to come from? The answer is revealed in a couple chapters' time. But
first, let's talk about <>.
+
+现在,你可能会问,那些 `BatchQuantityChanged` 事件将从哪里产生?答案会在几章之后揭晓。
+但首先,让我们讨论一下 <>。
diff --git a/chapter_10_commands.asciidoc b/chapter_10_commands.asciidoc
index 09a41f6e..a9bb4383 100644
--- a/chapter_10_commands.asciidoc
+++ b/chapter_10_commands.asciidoc
@@ -1,11 +1,14 @@
[[chapter_10_commands]]
== Commands and Command Handler
+命令与命令处理器
((("commands", id="ix_cmnd")))
In the previous chapter, we talked about using events as a way of representing
the inputs to our system, and we turned our application into a message-processing
machine.
+在上一章中,我们讨论了使用事件作为表示系统输入的一种方式,并将我们的应用程序转变为一个消息处理机器。
+
To achieve that, we converted all our use-case functions to event handlers.
When the API receives a POST to create a new batch, it builds a new `BatchCreated`
event and handles it as if it were an internal event.
@@ -14,11 +17,18 @@ created yet; that's why we called the API. We're going to fix that conceptual
wart by introducing commands and showing how they can be handled by the same
message bus but with slightly different rules.
+为了实现这一点,我们将所有用例函数转换为了事件处理器。
+当 API 接收到一个用于创建新批次的 POST 请求时,它会构建一个新的 `BatchCreated` 事件,并像处理内部事件一样处理它。
+这可能会让人感觉有些违背直觉。毕竟,批次还 _没有_ 被创建;这正是我们调用 API 的原因。
+我们将通过引入命令来解决这一概念上的瑕疵,并展示如何通过相同的消息总线来处理它们,只是规则略有不同。
+
[TIP]
====
The code for this chapter is in the
chapter_10_commands branch https://oreil.ly/U_VGa[on GitHub]:
+本章的代码在 https://oreil.ly/U_VGa[GitHub 上] 的 chapter_10_commands 分支中:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -29,6 +39,7 @@ git checkout chapter_09_all_messagebus
====
=== Commands and Events
+命令与事件
((("commands", "events versus", id="ix_cmdevnt")))
((("events", "commands versus", id="ix_evntcmd")))
@@ -36,34 +47,56 @@ Like events, _commands_ are a type of message--instructions sent by one part of
a system to another. We usually represent commands with dumb data
structures and can handle them in much the same way as events.
+与事件类似,_命令(command)_ 也是一种消息 —— 系统的一个部分发送给另一个部分的指令。
+我们通常用简单的数据结构来表示命令,并且可以用与处理事件几乎相同的方式来处理它们。
+
The differences between commands and events, though, are important.
+然而,命令和事件之间的区别是重要的。
+
Commands are sent by one actor to another specific actor with the expectation that
a particular thing will happen as a result. When we post a form to an API handler,
we are sending a command. We name commands with imperative mood verb phrases like
"allocate stock" or "delay shipment."
+命令是由一个行为者发送给另一个特定的行为者的,并期望因此发生某个特定的结果。
+当我们向一个 API 处理器提交一个表单时,我们实际上是在发送一个命令。
+我们用祈使语气的动词短语来命名命令,例如“分配库存”或“延迟发货”。
+
Commands capture _intent_. They express our wish for the system to do something.
As a result, when they fail, the sender needs to receive error information.
+命令捕获 _意图(intent)_。它们表达了我们希望系统执行某些操作的意愿。
+因此,当命令执行失败时,发送者需要接收到错误信息。
+
_Events_ are broadcast by an actor to all interested listeners. When we publish
`BatchQuantityChanged`, we don't know who's going to pick it up. We name events
with past-tense verb phrases like "order allocated to stock" or "shipment delayed."
+_事件(Event)_ 是由一个行为者广播发送给所有感兴趣的监听者的。
+当我们发布 `BatchQuantityChanged` 时,我们并不知道谁会处理它。
+我们用过去时的动词短语来命名事件,例如“订单已分配到库存”或“发货已延迟”。
+
We often use events to spread the knowledge about successful commands.
+我们经常使用事件来传播有关命令成功的信息。
+
Events capture _facts_ about things that happened in the past. Since we don't
know who's handling an event, senders should not care whether the receivers
succeeded or failed. <> recaps the differences.
+事件捕获过去已经发生的 _事实(fact)_。
+由于我们不知道谁会处理一个事件,发送者不应关心接收者是成功还是失败。
+<> 总结了它们之间的区别。
+
[[events_vs_commands_table]]
[options="header"]
-.Events versus commands
+.Events versus commands(事件与命令的对比)
|===
-e| e| Event e| Command
-| Named | Past tense | Imperative mood
-| Error handling | Fail independently | Fail noisily
-| Sent to | All listeners | One recipient
+e| e| Event(事件) e| Command(命令)
+| Named(命名) | Past tense(过去式) | Imperative mood(祈使语气)
+| Error handling(错误处理) | Fail independently(独立地失败) | Fail noisily(显式地失败)
+| Sent to(发送给) | All listeners(所有监听者) | One recipient(一个接收者)
|===
@@ -75,8 +108,10 @@ e| e| Event e| Command
((("commands", "events versus", startref="ix_cmdevnt")))
What kinds of commands do we have in our system right now?
+我们系统中目前有哪些类型的命令?
+
[[commands_dot_py]]
-.Pulling out some commands (src/allocation/domain/commands.py)
+.Pulling out some commands (src/allocation/domain/commands.py)(取出一些命令)
====
[source,python]
----
@@ -107,11 +142,15 @@ class ChangeBatchQuantity(Command): #<3>
====
<1> `commands.Allocate` will replace `events.AllocationRequired`.
+`commands.Allocate` 将取代 `events.AllocationRequired`。
<2> `commands.CreateBatch` will replace `events.BatchCreated`.
+`commands.CreateBatch` 将取代 `events.BatchCreated`。
<3> `commands.ChangeBatchQuantity` will replace `events.BatchQuantityChanged`.
+`commands.ChangeBatchQuantity` 将取代 `events.BatchQuantityChanged`。
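
Concretely, the new module might look like this, with the fields inferred from the events each command replaces:

[source,python]
----
from dataclasses import dataclass
from datetime import date
from typing import Optional


class Command:
    pass


@dataclass
class Allocate(Command):
    orderid: str
    sku: str
    qty: int


@dataclass
class CreateBatch(Command):
    ref: str
    sku: str
    qty: int
    eta: Optional[date] = None


@dataclass
class ChangeBatchQuantity(Command):
    ref: str
    qty: int
----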
=== Differences in Exception Handling
+异常处理的差异
((("message bus", "dispatching events and commands differently")))
@@ -121,8 +160,12 @@ Just changing the names and verbs is all very well, but that won't
change the behavior of our system. We want to treat events and commands similarly,
but not exactly the same. Let's see how our message bus changes:
+仅仅更改名称和动词很简单,但这并不会改变我们系统的行为。
+我们希望对事件和命令进行类似但不完全相同的处理。
+让我们看看我们的消息总线是如何变化的:
+
[[messagebus_dispatches_differently]]
-.Dispatch events and commands differently (src/allocation/service_layer/messagebus.py)
+.Dispatch events and commands differently (src/allocation/service_layer/messagebus.py)(区分处理事件与命令)
====
[source,python]
----
@@ -150,14 +193,18 @@ def handle( #<1>
<1> It still has a main `handle()` entrypoint that takes a `message`, which may
be a command or an event.
+它仍然有一个主要的 `handle()` 入口点,接受一个 `message`,这个消息可能是一个命令或一个事件。
<2> We dispatch events and commands to two different helper functions, shown next.
+我们将事件和命令分发到两个不同的辅助函数中,如下所示。
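
A sketch of that top-level dispatcher, consistent with the two callouts:

[source,python]
----
from typing import Union

Message = Union[commands.Command, events.Event]


def handle(message: Message, uow: unit_of_work.AbstractUnitOfWork):
    results = []
    queue = [message]
    while queue:
        message = queue.pop(0)
        if isinstance(message, events.Event):
            handle_event(message, queue, uow)
        elif isinstance(message, commands.Command):
            cmd_result = handle_command(message, queue, uow)
            results.append(cmd_result)
        else:
            raise Exception(f"{message} was not an Event or Command")
    return results
----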
Here's how we handle events:
+以下是我们处理事件的方式:
+
[[handle_event]]
-.Events cannot interrupt the flow (src/allocation/service_layer/messagebus.py)
+.Events cannot interrupt the flow (src/allocation/service_layer/messagebus.py)(事件不能中断流程)
====
[source,python]
----
@@ -179,15 +226,19 @@ def handle_event(
<1> Events go to a dispatcher that can delegate to multiple handlers per
event.
+事件被发送到一个调度器,该调度器可以将每个事件委托给多个处理器。
<2> It catches and logs errors but doesn't let them interrupt
message processing.
+它会捕获并记录错误,但不会让它们中断消息处理。
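
In sketch form, assuming an `EVENT_HANDLERS` dict mapping each event type to a list of handlers:

[source,python]
----
def handle_event(event, queue, uow):
    for handler in EVENT_HANDLERS[type(event)]:
        try:
            logger.debug("handling event %s with handler %s", event, handler)
            handler(event, uow=uow)
            queue.extend(uow.collect_new_events())
        except Exception:
            # log, but never let an event handler interrupt message processing
            logger.exception("Exception handling event %s", event)
            continue
----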
((("commands", "exception handling")))
And here's how we do commands:
+以下是我们处理命令的方式:
+
[[handle_command]]
-.Commands reraise exceptions (src/allocation/service_layer/messagebus.py)
+.Commands reraise exceptions (src/allocation/service_layer/messagebus.py)(命令会重新引发异常)
====
[source,python]
----
@@ -210,12 +261,16 @@ def handle_command(
<1> The command dispatcher expects just one handler per command.
+命令调度器期望每个命令仅有一个处理器。
<2> If any errors are raised, they fail fast and will bubble up.
+如果出现任何错误,它们会快速失败并冒泡上报。
<3> `return result` is only temporary; as mentioned in <>,
it's a temporary hack to allow the message bus to return the batch
reference for the API to use. We'll fix this in <>.
+`return result` 只是暂时的;正如在 <> 中提到的,这是一个临时的解决方案,
+用于让消息总线返回批次引用以供 API 使用。我们将在 <> 中修复这个问题。
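
Again in sketch form, with `COMMAND_HANDLERS` mapping each command type to exactly one handler:

[source,python]
----
def handle_command(command, queue, uow):
    logger.debug("handling command %s", command)
    try:
        handler = COMMAND_HANDLERS[type(command)]
        result = handler(command, uow=uow)
        queue.extend(uow.collect_new_events())
        return result  # the temporary hack: pass the result back out
    except Exception:
        # commands fail fast: log, then let the exception bubble up
        logger.exception("Exception handling command %s", command)
        raise
----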
((("commands", "handlers for")))
@@ -225,8 +280,11 @@ We also change the single `HANDLERS` dict into different ones for
commands and events. Commands can have only one handler, according
to our convention:
+我们还将单一的 `HANDLERS` 字典更改为针对命令和事件的不同字典。
+根据我们的约定,命令只能有一个处理器:
+
[[new_handlers_dicts]]
-.New handlers dicts (src/allocation/service_layer/messagebus.py)
+.New handlers dicts (src/allocation/service_layer/messagebus.py)(新的处理器字典)
====
[source,python]
----
@@ -245,6 +303,7 @@ COMMAND_HANDLERS = {
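
The two dicts plausibly end up looking like this, with the handler names as introduced in the previous chapters:

[source,python]
----
EVENT_HANDLERS = {
    events.OutOfStock: [handlers.send_out_of_stock_notification],
}  # type: Dict[Type[events.Event], List[Callable]]

COMMAND_HANDLERS = {
    commands.Allocate: handlers.allocate,
    commands.CreateBatch: handlers.add_batch,
    commands.ChangeBatchQuantity: handlers.change_batch_quantity,
}  # type: Dict[Type[commands.Command], Callable]
----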
=== Discussion: Events, Commands, and Error Handling
+讨论:事件、命令与错误处理
((("commands", "events, commands, and error handling", id="ix_cmndeverr")))
((("error handling", "events, commands, and", id="ix_errhnd")))
@@ -255,17 +314,27 @@ consistent state?" If we manage to process half of the events during `messagebus
out-of-memory error kills our process, how do we mitigate problems caused by the
lost messages?
+许多开发人员在这一点上会感到不安,并问:“如果一个事件处理失败会怎样?我该如何确保系统处于一致的状态?”
+如果在 `messagebus.handle` 处理了一半的事件时,一个内存不足错误导致我们的进程终止,我们该如何缓解因丢失消息引起的问题?
+
Let's start with the worst case: we fail to handle an event, and the system is
left in an inconsistent state. What kind of error would cause this? Often in our
systems we can end up in an inconsistent state when only half an operation is
completed.
+让我们从最糟糕的情况开始:我们未能处理一个事件,并且系统因此处于不一致的状态。
+什么样的错误会导致这种情况呢?通常,在我们的系统中,当只有一半的操作完成时,就可能导致进入不一致的状态。
+
For example, we could allocate three units of `DESIRABLE_BEANBAG` to a customer's
order but somehow fail to reduce the amount of remaining stock. This would
cause an inconsistent state: the three units of stock are both allocated _and_
available, depending on how you look at it. Later, we might allocate those
same beanbags to another customer, causing a headache for customer support.
+例如,我们可能把三个单位的 `DESIRABLE_BEANBAG` 分配给了某个客户的订单,但由于某种原因却未能减少剩余库存的数量。
+这会导致不一致的状态:取决于你怎么看,这三个单位的库存既算已分配,_又_ 算可用。
+随后,我们可能会把同样的豆袋沙发分配给另一个客户,从而给客户支持部门带来麻烦。
+
((("Unit of Work pattern", "UoW managing success or failure of aggregate update")))
((("consistency boundaries", "aggregates acting as")))
((("aggregates", "acting as consistency boundaries")))
@@ -274,17 +343,27 @@ happening. We've carefully identified _aggregates_ that act as consistency
boundaries, and we've introduced a _UoW_ that manages the atomic
success or failure of an update to an aggregate.
+然而,在我们的分配服务中,我们已经采取了措施来防止这种情况的发生。
+我们仔细识别了作为一致性边界的 _聚合_,并引入了 _工作单元_,
+由它来管理对聚合的更新,使更新要么原子性地成功,要么原子性地失败。
+
((("Product object", "acting as consistency boundary")))
For example, when we allocate stock to an order, our consistency boundary is the
`Product` aggregate. This means that we can't accidentally overallocate: either
a particular order line is allocated to the product, or it is not--there's no
room for inconsistent states.
+例如,当我们将库存分配给一个订单时,我们的一致性边界是 `Product` 聚合。
+这意味着我们不可能错误地分配过多:某个特定的订单项要么被分配到产品,要么没有 —— 没有出现不一致状态的余地。
+
By definition, we don't require two aggregates to be immediately consistent, so
if we fail to process an event and update only a single aggregate, our system
can still be made eventually consistent. We shouldn't violate any constraints of
the system.
+根据定义,我们不要求两个聚合是立即一致的,因此如果我们未能处理一个事件且仅更新了一个聚合,我们的系统仍然可以实现最终一致性。
+我们不应该违反系统的任何约束。
+
With this example in mind, we can better understand the reason for splitting
messages into commands and events. When a user wants to make the system do
something, we represent their request as a _command_. That command should modify
@@ -292,14 +371,27 @@ a single _aggregate_ and either succeed or fail in totality. Any other bookkeepi
don't require the event handlers to succeed in order for the command to be
successful.
+通过这个示例,我们可以更好地理解将消息分为命令和事件的原因。
+当用户希望系统执行某些操作时,我们将他们的请求表示为一个 _命令_。
+该命令应当修改单个 _聚合_,并且要么完全成功,要么完全失败。
+任何其他的记录、清理以及通知都可以通过 _事件_ 来完成。
+命令的成功不要求事件处理器必须成功执行。
+
Let's look at another example (from a different, imaginary project) to see why not.
+让我们再看一个示例(来自另一个假想的项目),来了解为什么不需要这样要求。
+
Imagine we are building an ecommerce website that sells expensive luxury goods.
Our marketing department wants to reward customers for repeat visits. We will
flag customers as VIPs after they make their third purchase, and this will
entitle them to priority treatment and special offers. Our acceptance criteria
for this story reads as follows:
+想象一下,我们正在构建一个销售昂贵奢侈品的电商网站。
+我们的市场部门希望奖励那些多次访问的客户。
+在客户完成第三次购买后,我们会将他们标记为 VIP,这将使他们享受优先的服务和特殊优惠。
+我们针对这个需求的验收标准如下:
+
[source,gherkin]
[role="skip"]
@@ -308,8 +400,15 @@ Given a customer with two orders in their history,
When the customer places a third order,
Then they should be flagged as a VIP.
+假设一位客户的历史记录中已有两笔订单,
+当该客户下第三笔订单时,
+那么该客户应被标记为 VIP。
+
When a customer first becomes a VIP
Then we should send them an email to congratulate them
+
+当一位客户首次成为 VIP 时,
+那么我们应向他们发送一封祝贺邮件。
----
((("aggregates", "History aggregate recording orders and raising domain events")))
@@ -318,8 +417,11 @@ want to build a new `History` aggregate that records orders and can raise domain
events when rules are met. We will structure the code like this:
+使用我们在本书中已经讨论过的技术,我们决定构建一个新的 `History` 聚合,用于记录订单,并在满足规则时触发领域事件。
+我们将把代码结构化如下:
+
[[vip_customer_listing]]
-.VIP customer (example code for a different project)
+.VIP customer (example code for a different project)(VIP 客户,另一个项目的示例代码)
====
[source,python]
[role="skip"]
@@ -372,39 +474,60 @@ def congratulate_vip_customer(uow, event: CustomerBecameVip): #<4>
<1> The `History` aggregate captures the rules indicating when a customer becomes a VIP.
This puts us in a good place to handle changes when the rules become more
complex in the future.
+`History` 聚合捕获了指示客户何时成为 VIP 的规则。
+这为我们在未来规则变得更复杂时处理更改奠定了良好的基础。
<2> Our first handler creates an order for the customer and raises a domain
event `OrderCreated`.
+我们的第一个处理器为客户创建一个订单,并触发一个领域事件 `OrderCreated`。
<3> Our second handler updates the `History` object to record that an order was
[.keep-together]#created#.
+我们的第二个处理器更新 `History` 对象,以记录一个订单已创建。
<4> Finally, we send an email to the customer when they become a VIP.
+最后,当客户成为 VIP 时,我们会向他们发送一封电子邮件。
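
Fleshed out, the example might read as follows (this is an imaginary project, so names like `HistoryEntry`, `CreateOrder`, and the `uow` collections are illustrative only):

[source,python]
----
class History:  # the aggregate capturing the VIP rules
    def __init__(self, customer_id: int):
        self.orders = set()
        self.customer_id = customer_id
        self.events = []

    def record_order(self, order_id: str, order_amount: int):
        entry = HistoryEntry(order_id, order_amount)
        if entry in self.orders:
            return
        self.orders.add(entry)
        if len(self.orders) == 3:
            self.events.append(CustomerBecameVip(self.customer_id))


def create_order_from_basket(uow, cmd: CreateOrder):
    with uow:
        order = Order.from_basket(cmd.customer_id, cmd.basket_items)
        uow.orders.add(order)
        uow.commit()  # raises OrderCreated


def update_customer_history(uow, event: OrderCreated):
    with uow:
        history = uow.order_history.get(event.customer_id)
        history.record_order(event.order_id, event.order_amount)
        uow.commit()  # raises CustomerBecameVip on the third order


def congratulate_vip_customer(uow, event: CustomerBecameVip):
    with uow:
        customer = uow.customers.get(event.customer_id)
        email.send(customer.email_address, f"Congratulations {customer.first_name}!")
----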
//IDEA: Sequence diagram here?
Using this code, we can gain some intuition about error handling in an
event-driven system.
+通过使用这段代码,我们可以直观地了解事件驱动系统中的错误处理。
+
((("aggregates", "raising events about")))
In our current implementation, we raise events about an aggregate _after_ we
persist our state to the database. What if we raised those events _before_ we
persisted, and committed all our changes at the same time? That way, we could be
sure that all the work was complete. Wouldn't that be safer?
+在我们当前的实现中,我们是在将状态持久化到数据库 _之后_ 触发聚合的事件。
+那么,如果我们在 _持久化之前_ 触发这些事件,并同时提交所有的更改会怎样呢?
+通过这种方式,我们可以确保所有工作都已完成。这难道不会更加安全一些吗?
+
What happens, though, if the email server is slightly overloaded? If all the work
has to complete at the same time, a busy email server can stop us from taking money
for orders.
+但如果邮件服务器稍微过载了一些会发生什么呢?
+如果所有工作都必须同时完成,那么一个繁忙的邮件服务器可能会阻止我们处理订单付款。
+
What happens if there is a bug in the implementation of the `History` aggregate?
Should we fail to take your money just because we can't recognize you as a VIP?
+如果 `History` 聚合的实现中存在一个错误会发生什么呢?
+我们是否应该仅仅因为无法将你识别为 VIP 而拒绝处理你的付款?
+
By separating out these concerns, we have made it possible for things to fail
in isolation, which improves the overall reliability of the system. The only
part of this code that _has_ to complete is the command handler that creates an
order. This is the only part that a customer cares about, and it's the part that
our business stakeholders should prioritize.
+通过将这些关注点分离,我们使得某些事情可以独立失败,从而提高了系统的整体可靠性。
+这段代码中唯一 _必须_ 完成的部分是创建订单的命令处理器。
+这是客户唯一关心的部分,也是我们的业务利益相关者应该优先考虑的部分。
+
((("commands", "events, commands, and error handling", startref="ix_cmndeverr")))
((("error handling", "events, commands, and", startref="ix_errhnd")))
((("events", "events, commands, and error handling", startref="ix_evntcmderr")))
@@ -415,9 +538,15 @@ the steps of our natural language acceptance criteria. This concordance of names
and structure helps us to reason about our systems as they grow larger and more
complex.
+请注意,我们是如何有意地将事务边界与业务流程的起点和终点对齐的。
+我们在代码中使用的名称与业务利益相关者使用的术语相匹配,
+而我们编写的处理器也与自然语言验收标准中的步骤相对应。
+这种命名与结构的一致性有助于我们在系统规模更大、更复杂时对其进行推理和理解。
+
[[recovering_from_errors]]
=== Recovering from Errors Synchronously
+同步错误恢复
((("commands", "events, commands, and error handling", "recovering from errors synchronously")))
((("errors, recovering from synchronously")))
@@ -425,14 +554,21 @@ Hopefully we've convinced you that it's OK for events to fail independently
from the commands that raised them. What should we do, then, to make sure we
can recover from errors when they inevitably occur?
+希望我们已经说服了你,事件可以独立于触发它们的命令失败是可以接受的。
+那么,当错误不可避免地发生时,我们应该如何确保能够从错误中恢复呢?
+
The first thing we need is to know _when_ an error has occurred, and for that we
usually rely on logs.
+我们首先需要知道错误 _何时_ 发生,而通常我们会依赖日志来获知。
+
((("message bus", "handle_event method")))
Let's look again at the `handle_event` method from our message bus:
+让我们再来看一下消息总线中的 `handle_event` 方法:
+
[[messagebus_logging]]
-.Current handle function (src/allocation/service_layer/messagebus.py)
+.Current handle function (src/allocation/service_layer/messagebus.py)(当前处理函数)
====
[source,python,highlight=8;12]
----
@@ -456,6 +592,9 @@ When we handle a message in our system, the first thing we do is write a log
line to record what we're about to do. For our `CustomerBecameVIP` use case, the
logs might read as follows:
+当我们在系统中处理一条消息时,我们做的第一件事就是写一条日志,以记录我们即将执行的操作。
+对于我们的 `CustomerBecameVIP` 用例,日志可能如下所示:
+
----
Handling event CustomerBecameVIP(customer_id=12345)
with handler
@@ -466,22 +605,34 @@ Because we've chosen to use dataclasses for our message types, we get a neatly
printed summary of the incoming data that we can copy and paste into a Python
shell to re-create the object.
+由于我们选择使用数据类(dataclasses)作为消息类型,我们会得到一个整齐打印的传入数据摘要,
+我们可以将其复制并粘贴到 _Python_ shell 中来重新创建该对象。
+
When an error occurs, we can use the logged data to either reproduce the problem
in a unit test or replay the message into the system.
+当发生错误时,我们可以使用日志中的数据来在单元测试中重现问题,或者将消息重新发送到系统中。
+
Manual replay works well for cases where we need to fix a bug before we can
re-process an event, but our systems will _always_ experience some background
level of transient failure. This includes things like network hiccups, table
deadlocks, and brief downtime caused by deployments.
+手动重放非常适用于需要先修复缺陷才能重新处理事件的情况,
+但我们的系统 _总会_ 经历一定程度的背景性瞬时故障,
+例如网络波动、表死锁,以及部署引起的短暂停机。
+
((("retries", "message bus handle_event with")))
((("message bus", "handle_event with retries")))
For most of those cases, we can recover elegantly by trying again. As the
proverb says, "If at first you don't succeed, retry the operation with an
exponentially increasing back-off period."
+对于大多数这种情况,我们可以通过重试来优雅地恢复。
+正如谚语所说:“如果最初没有成功,请以指数递增的退避时间重试操作。”
+
[[messagebus_handle_event_with_retry]]
-.Handle with retry (src/allocation/service_layer/messagebus.py)
+.Handle with retry (src/allocation/service_layer/messagebus.py)(带重试的处理)
====
[source,python]
[role="skip"]
@@ -519,22 +670,32 @@ def handle_event(
<1> Tenacity is a Python library that implements common patterns for retrying.
((("Tenacity library")))
((("retries", "Tenacity library for")))
+Tenacity 是一个 _Python_ 库,它实现了常见的重试模式。
<2> Here we configure our message bus to retry operations up to three times,
with an exponentially increasing wait between attempts.
+这里我们配置了消息总线,使其最多重试操作三次,并在尝试之间以指数递增的方式等待。
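
Spelled out, the retry loop might look like this (Tenacity's `Retrying` iterator re-runs the block inside `with attempt:` until it succeeds or the stop condition is hit):

[source,python]
----
from tenacity import Retrying, RetryError, stop_after_attempt, wait_exponential


def handle_event(event, queue, uow):
    for handler in EVENT_HANDLERS[type(event)]:
        try:
            for attempt in Retrying(
                stop=stop_after_attempt(3), wait=wait_exponential()
            ):
                with attempt:
                    logger.debug("handling event %s with handler %s", event, handler)
                    handler(event, uow=uow)
                    queue.extend(uow.collect_new_events())
        except RetryError as retry_failure:
            logger.error(
                "Failed to handle event %s times, giving up!",
                retry_failure.last_attempt.attempt_number,
            )
            continue
----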
Retrying operations that might fail is probably the single best way to improve
the resilience of our software. Again, the Unit of Work and Command Handler
patterns mean that each attempt starts from a consistent state and won't leave
things half-finished.
+重试可能失败的操作可能是改善我们软件弹性的最佳方法之一。
+同样地,工作单元(Unit of Work)和命令处理器(Command Handler)模式确保每次尝试都从一致的状态开始,
+并且不会使操作半途而废。
+
WARNING: At some point, regardless of `tenacity`, we'll have to give up trying to
process the message. Building reliable systems with distributed messages is
hard, and we have to skim over some tricky bits. There are pointers to more
reference materials in the <>.
+无论使用 `tenacity` 重试多少次,我们最终还是可能不得不放弃处理某条消息。
+构建使用分布式消息的可靠系统是困难的,我们不得不略过一些棘手的部分。
+在 <> 中有更多参考资料的指引。
[role="pagebreak-before less_space"]
=== Wrap-Up
+总结
((("Command Handler pattern")))
((("events", "splitting command and events, trade-offs")))
@@ -546,32 +707,48 @@ and their own data structure is quite a fundamental thing to do. You'll
sometimes see people use the name _Command Handler_ pattern to describe what
we're doing with Events, Commands, and Message Bus.
+在本书中,我们决定先介绍事件的概念,然后再介绍命令的概念,但其他指南通常是相反的顺序。
+通过为系统可以响应的请求赋予名称和独立的数据结构,使其显式化,这是一个相当基础的工作。
+有时你会看到人们使用 _命令处理器_ (Command Handler)模式来描述我们在事件、命令和消息总线中所做的事情。
+
<> discusses some of the things you
should think about before you jump on board.
+<> 讨论了在你采纳这些概念之前需要考虑的一些事项。
+
[[chapter_10_commands_and_events_tradeoffs]]
[options="header"]
-.Splitting commands and events: the trade-offs
+.Splitting commands and events: the trade-offs(拆分命令和事件:权衡利弊)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* Treating commands and events differently helps us understand which things
have to succeed and which things we can tidy up later.
+将命令和事件区别对待有助于我们理解哪些事情必须成功完成,哪些事情可以稍后再处理。
* `CreateBatch` is definitely a less confusing name than `BatchCreated`. We are
being explicit about the intent of our users, and explicit is better than
implicit, right?
+`CreateBatch` 无疑比 `BatchCreated` 更少令人困惑。
+我们明确表达了用户的意图,而明确通常比含糊更好,不是吗?
a|
* The semantic differences between commands and events can be subtle. Expect
bikeshedding arguments over the differences.
+命令和事件之间的语义差异可能十分微妙。
+可以预见会有关于二者差异的“自行车棚”式琐碎争论(bikeshedding)。
* We're expressly inviting failure. We know that sometimes things will break, and
we're choosing to handle that by making the failures smaller and more isolated.
This can make the system harder to reason about and requires better monitoring.
((("commands", startref="ix_cmnd")))
+我们明确地接受失败的可能性。
+我们知道有时会出问题,因此选择通过让失败更小、更隔离来应对。
+这可能会使系统更难以推理,并需要更好的监控。
|===
In <> we'll talk about using events as an integration pattern.
+
+在 <> 中,我们将讨论将事件用作一种集成模式。
// IDEA: discussion, can events raise commands?
diff --git a/chapter_11_external_events.asciidoc b/chapter_11_external_events.asciidoc
index 8460fc64..0ce7a1df 100644
--- a/chapter_11_external_events.asciidoc
+++ b/chapter_11_external_events.asciidoc
@@ -1,5 +1,6 @@
[[chapter_11_external_events]]
== Event-Driven Architecture: Using Events to Integrate Microservices
+事件驱动架构:使用事件来集成微服务
((("event-driven architecture", "using events to integrate microservices", id="ix_evntarch")))
((("external events", id="ix_extevnt")))
@@ -8,11 +9,16 @@ In the preceding chapter, we never actually spoke about _how_ we would receive
the "batch quantity changed" events, or indeed, how we might notify the
outside world about reallocations.
+在前一章中,我们实际上从未谈及 _如何_ 接收“批次数量已更改”事件,或者我们如何通知外界关于重新分配的情况。
+
We have a microservice with a web API, but what about other ways of talking
to other systems? How will we know if, say, a shipment is delayed or the
quantity is amended? How will we tell the warehouse system that an order has
been allocated and needs to be sent to a customer?
+我们有一个带有 Web API 的微服务,但与其他系统交互的其他方式呢?比如说,如果一个货运被延迟或数量被修改,
+我们怎么得知?我们又如何告诉仓储系统,一个订单已经被分配,需要发送给客户呢?
+
In this chapter, we'd like to show how the events metaphor can be extended
to encompass the way that we handle incoming and outgoing messages from the
system. Internally, the core of our application is now a message processor.
@@ -22,8 +28,12 @@ events from external sources via an external message bus (we'll use Redis pub/su
queues as an example) and publish its outputs, in the form of events, back
there as well.
+在本章中,我们希望展示如何扩展事件这一比喻,使其涵盖我们处理系统中传入和传出消息的方式。在内部,我们应用程序的核心现在是一个消息处理器。
+让我们继续深化这个思路,使其也能够在 _外部_ 成为一个消息处理器。如 <> 所示,
+我们的应用程序将通过外部消息总线(这里以 Redis 的发布/订阅队列为例)接收来自外部来源的事件,并以事件的形式将其输出发布回外部消息总线。
+
[[message_processor_diagram]]
-.Our application is a message processor
+.Our application is a message processor(我们的应用程序是一个消息处理器)
image::images/apwp_1101.png[]
[TIP]
@@ -31,6 +41,8 @@ image::images/apwp_1101.png[]
The code for this chapter is in the
chapter_11_external_events branch https://oreil.ly/UiwRS[on GitHub]:
+本章的代码在 https://oreil.ly/UiwRS[GitHub 上] 的 chapter_11_external_events 分支中:
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -42,6 +54,7 @@ git checkout chapter_10_commands
=== Distributed Ball of Mud, and Thinking in Nouns
+分布式泥球,与基于名词的思考方式
((("Distributed Ball of Mud antipattern", "and thinking in nouns", id="ix_DBoM")))
((("Ball of Mud pattern", "distributed ball of mud and thinking in nouns", id="ix_BoMdist")))
@@ -52,13 +65,20 @@ engineers who are trying to build out a microservices architecture. Often they
are migrating from an existing application, and their first instinct is to
split their system into _nouns_.
+在深入探讨之前,让我们先来谈谈其他选择。我们经常与正尝试构建微服务架构的工程师交流。
+他们通常在从现有应用程序迁移时,第一反应是将系统按 _名词_ 拆分。
+
What nouns have we introduced so far in our system? Well, we have batches of
stock, orders, products, and customers. So a naive attempt at breaking
up the system might have looked like <> (notice that
we've named our system after a noun, _Batches_, instead of _Allocation_).
+到目前为止,我们在系统中引入了哪些名词?嗯,我们有库存批次、订单、产品和客户。因此,
+一种天真的拆分尝试可能会像 <> 那样(注意,
+我们用一个名词 _Batches_ 来命名我们的系统,而不是 _Allocation_)。
+
[[batches_context_diagram]]
-.Context diagram with noun-based services
+.Context diagram with noun-based services(基于名词的服务的上下文图)
image::images/apwp_1102.png[]
[role="image-source"]
----
@@ -80,6 +100,8 @@ Rel_D(batches, warehouse, "Sends instructions to")
Each "thing" in our system has an associated service, which exposes an HTTP API.
+我们系统中的每个“事物”都有一个相关的服务,并通过一个 HTTP API 暴露出来。
+
((("commands", "command flow to reserve stock, confirm reservation, dispatch goods, and make customer VIP")))
Let's work through an example happy-path flow in <>:
our users visit a website and can choose from products that are in stock. When
@@ -88,9 +110,13 @@ order is complete, we confirm the reservation, which causes us to send dispatch
instructions to the warehouse. Let's also say, if this is the customer's third
order, we want to update the customer record to flag them as a VIP.
+让我们通过 <> 中的一个示例“理想路径”流程来深入了解:我们的用户访问网站,可以选择有库存的产品。
+当他们将商品添加到购物车中时,我们会为他们保留一些库存。当订单完成时,我们确认这一预留操作,这会促使我们向仓储发送发货指令。
+我们还假设,如果这是客户的第三个订单,我们希望更新客户记录,以标记他们为 VIP。
+
[role="width-80"]
[[command_flow_diagram_1]]
-.Command flow 1
+.Command flow 1(命令流程 1)
image::images/apwp_1103.png[]
[role="image-source"]
----
@@ -148,27 +174,40 @@ the different symbols.
We can think of each of these steps as a command in our system: `ReserveStock`,
[.keep-together]#`ConfirmReservation`#, `DispatchGoods`, `MakeCustomerVIP`, and so forth.
+我们可以将这些步骤中的每一步视为系统中的一个命令:`ReserveStock`、[.keep-together]#`ConfirmReservation`#、`DispatchGoods`、`MakeCustomerVIP`,等等。
+
This style of architecture, where we create a microservice per database table
and treat our HTTP APIs as CRUD interfaces to anemic models, is the most common
initial way for people to approach service-oriented design.
+这种架构风格是最常见的服务化设计初始方式,其中我们为每个数据库表创建一个微服务,并将 HTTP API 视为贫血模型的 CRUD 接口。
+
This works _fine_ for systems that are very simple, but it can quickly degrade into
a distributed ball of mud.
+对于非常简单的系统来说,这种方式运转得 _还算可以_,但它很快就可能演变成一个分布式的泥球。
+
To see why, let's consider another case. Sometimes, when stock arrives at the
warehouse, we discover that items have been water damaged during transit. We
can't sell water-damaged sofas, so we have to throw them away and request more
stock from our partners. We also need to update our stock model, and that
might mean we need to reallocate a customer's order.
+要了解原因,让我们考虑另一个情况。有时候,当库存到达仓库时,我们会发现商品在运输过程中受到了水损。我们无法出售受水损的沙发,
+因此我们不得不将其丢弃,并向合作伙伴请求更多库存。同时,我们需要更新我们的库存模型,而这可能意味着我们需要重新分配客户的订单。
+
Where does this logic go?
+这种逻辑该放在哪里呢?
+
((("commands", "command flow when warehouse knows stock is damaged")))
Well, the Warehouse system knows that the stock has been damaged, so maybe it
should own this process, as shown in <>.
+嗯,仓储系统知道库存受损了,所以也许它应该负责这个流程,如 <> 所示。
+
[[command_flow_diagram_2]]
-.Command flow 2
+.Command flow 2(命令流程 2)
image::images/apwp_1104.png[]
[role="image-source"]
----
@@ -199,6 +238,9 @@ allocate stock, the Orders service drives the Batches system, which drives
Warehouse; but in order to handle problems at the warehouse, our Warehouse
system drives Batches, which drives Orders.
+这种方式也 _勉强可行_,但现在我们的依赖关系图变得一团糟。为了分配库存,订单服务驱动了批次系统,而批次系统又驱动了仓储系统;
+但为了处理仓储中的问题,我们的仓储系统又驱动了批次系统,而批次系统又驱动了订单服务。
+
Multiply this by all the other workflows we need to provide, and you can see
how services quickly get tangled up.
((("microservices", "event-based integration", "distributed Ball of Mud and thinking in nouns", startref="ix_mcroevntBoM")))
@@ -206,7 +248,10 @@ how services quickly get tangled up.
((("Ball of Mud pattern", "distributed ball of mud and thinking in nouns", startref="ix_BoMdist")))
((("Distributed Ball of Mud antipattern", "and thinking in nouns", startref="ix_DBoM")))
+将这个例子乘以我们需要支持的所有其他工作流,你就能看到服务如何迅速纠缠在一起。
+
=== Error Handling in Distributed Systems ===
+分布式系统中的错误处理
((("microservices", "event-based integration", "error handling in distributed systems", id="ix_mcroevnterr")))
((("error handling", "in distributed systems", id="ix_errhnddst")))
@@ -215,11 +260,17 @@ system when one of our requests fails? Let's say that a network error happens
right after we take a user's order for three `MISBEGOTTEN-RUG`, as shown in
<>.
+“事情会出错”是软件工程的一条普遍规律。当我们的系统中某个请求失败时会发生什么?假设在我们接收到用户订购三个 `MISBEGOTTEN-RUG` 后,
+立即发生了网络错误,如 <> 所示。
+
We have two options here: we can place the order anyway and leave it
unallocated, or we can refuse to take the order because the allocation can't be
guaranteed. The failure state of our batches service has bubbled up and is
affecting the reliability of our order service.
+在这里,我们有两个选项:我们可以继续下单,但让订单保持未分配状态,或者我们可以拒绝接受订单,因为无法保证分配成功。
+批次服务的故障状态已经冒泡上来了,并且正在影响我们订单服务的可靠性。
+
((("temporal coupling")))
((("coupling", "failure cascade as temporal coupling")))
((("commands", "command flow with error")))
@@ -229,8 +280,11 @@ of the system has to work at the same time for any part of it to work. As the
system gets bigger, there is an exponentially increasing probability that some
part is degraded.
+当两个事物必须一起被更改时,我们称它们是 _耦合的_。我们可以将这种故障级联视为一种 _时间耦合_:系统的每个部分都必须同时工作,
+任何部分才能正常运行。随着系统规模的增大,某些部分出现性能下降的概率会以指数级增长。
+
[[command_flow_diagram_with_error]]
-.Command flow with error
+.Command flow with error(带有错误的命令流程)
image::images/apwp_1105.png[]
[role="image-source"]
----
@@ -252,7 +306,7 @@ Orders --> Customer: ???
[role="nobreakinside less_space"]
[[connascence_sidebar]]
-.Connascence
+.Connascence(共生关系)
*******************************************************************************
((("connascence")))
@@ -260,33 +314,50 @@ We're using the term _coupling_ here, but there's another way to describe
the relationships between our systems. _Connascence_ is a term used by some
authors to describe the different types of coupling.
+我们在这里使用了术语 _耦合_,但描述我们系统之间关系还有另一种方式。_共生关系_(Connascence)是一些作者用于描述各种耦合类型的一个术语。
+
Connascence isn't _bad_, but some types of connascence are _stronger_ than
others. We want to have strong connascence locally, as when two classes are
closely related, but weak connascence at a distance.
+共生关系并不是 _糟糕的_,但某些类型的共生关系比其他类型的 _更强_。我们希望在本地拥有强共生关系,
+例如当两个类紧密相关时,但在远距离上保持弱共生关系。
+
In our first example of a distributed ball of mud, we see Connascence of
Execution: multiple components need to know the correct order of work for an
operation to be successful.
+在我们第一个分布式泥球的例子中,我们看到了执行共生关系(Connascence of Execution):多个组件需要知道正确的工作顺序,操作才能成功。
+
When thinking about error conditions here, we're talking about Connascence of
Timing: multiple things have to happen, one after another, for the operation to
work.
+当考虑这里的错误情况时,我们讨论的是时间共生关系(Connascence of Timing):多个操作必须一个接一个地发生,才能使操作正常工作。
+
When we replace our RPC-style system with events, we replace both of these types
of connascence with a _weaker_ type. That's Connascence of Name: multiple
components need to agree only on the name of an event and the names of fields
it carries.
+当我们用事件替代基于 RPC 风格的系统时,我们用一种 _更弱_ 的共生关系替代了以上两种。
+这种关系是名称共生关系(Connascence of Name):多个组件只需要就事件的名称以及其携带的字段名称达成一致。
+
((("coupling", "avoiding inappropriate coupling")))
We can never completely avoid coupling, except by having our software not talk
to any other software. What we want is to avoid _inappropriate_ coupling.
Connascence provides a mental model for understanding the strength and type of
coupling inherent in different architectural styles. Read all about it at
http://www.connascence.io[connascence.io].
+
+我们永远无法完全避免耦合,除非让我们的软件不与任何其他软件交互。我们想要的是避免 _不恰当的_ 耦合。
+共生关系(Connascence)为理解不同架构风格中固有的耦合强度和类型提供了一种思维模型。
+详情请参阅: http://www.connascence.io[connascence.io]。
*******************************************************************************
=== The Alternative: Temporal Decoupling Using Asynchronous Messaging
+另一种选择:使用异步消息实现时间解耦
((("messaging", "asynchronous, temporal decoupling with")))
((("temporal decoupling using asynchronous messaging")))
@@ -299,17 +370,26 @@ How do we get appropriate coupling? We've already seen part of the answer, which
terms of verbs, not nouns. Our domain model is about modeling a business
process. It's not a static data model about a thing; it's a model of a verb.
+我们如何实现适当的耦合?答案的一部分我们已经见过,那就是我们应该用动词而不是名词来思考。我们的领域模型是关于建模一个业务流程的。
+它不是一个关于某个事物的静态数据模型,而是一个关于动词的模型。
+
So instead of thinking about a system for orders and a system for batches,
we think about a system for _ordering_ and a system for _allocating_, and
so on.
+因此,与其考虑一个订单系统和一个批次系统,不如考虑一个用于 _下单_ 的系统和一个用于 _分配_ 的系统,等等。
+
When we separate things this way, it's a little easier to see which system
should be responsible for what. When thinking about _ordering_, really we want
to make sure that when we place an order, the order is placed. Everything else
can happen _later_, so long as it happens.
+当我们以这种方式分离时,更容易看出每个系统应该负责什么。当我们考虑 _下单_ 时,我们真正想要的是确保当我们下了一个订单时,
+订单会被成功下达。而其他的事情只要发生了,可以 _稍后_ 再进行。
+
NOTE: If this sounds familiar, it should! Segregating responsibilities is
the same process we went through when designing our aggregates and commands.
+如果这听起来很熟悉,那是理所当然的!职责分离正是我们在设计聚合和命令时所经历的相同过程。
((("Distributed Ball of Mud antipattern", "avoiding")))
((("consistency boundaries", "microservices as")))
@@ -319,25 +399,36 @@ rely on synchronous calls. Each service accepts commands from the outside world
and raises events to record the result. Other services can listen to those
events to trigger the next steps in the workflow.
+与聚合类似,微服务也应该是 _一致性边界_。在两个服务之间,我们可以接受最终一致性,这意味着我们不需要依赖同步调用。
+每个服务从外部世界接收命令,并通过事件来记录结果。其他服务可以监听这些事件来触发工作流中的下一步操作。
+
To avoid the Distributed Ball of Mud antipattern, instead of temporally coupled HTTP
API calls, we want to use asynchronous messaging to integrate our systems. We
want our `BatchQuantityChanged` messages to come in as external messages from
upstream systems, and we want our system to publish `Allocated` events for
downstream systems to listen to.
+为了避免分布式泥球这种反模式,我们希望使用异步消息来集成系统,而不是使用时间耦合的 HTTP API 调用。
+我们希望 `BatchQuantityChanged` 消息作为来自上游系统的外部消息传入,并希望我们的系统能够发布 `Allocated` 事件供下游系统监听。
+
Why is this better? First, because things can fail independently, it's easier
to handle degraded behavior: we can still take orders if the allocation system
is having a bad day.
+为什么这种方式更好?首先,因为各部分可以独立故障,所以更容易处理降级行为:即使分配系统出现问题,我们仍然可以接收订单。
+
Second, we're reducing the strength of coupling between our systems. If we
need to change the order of operations or to introduce new steps in the process,
we can do that locally.
+其次,我们降低了系统之间耦合的强度。如果我们需要更改操作的顺序或在流程中引入新的步骤,我们可以在本地完成这些更改。
+
// IDEA: need to add an example of a process change. And/or explain "locally"
// (EJ3) I think this is clear enough. Not sure about for a junior dev.
=== Using a Redis Pub/Sub Channel for Integration
+使用 Redis 发布/订阅通道进行集成
((("message brokers")))
((("publish-subscribe system", "using Redis pub/sub channel for microservices integration")))
@@ -350,25 +441,35 @@ services. This piece of infrastructure is often called a _message broker_. The
role of a message broker is to take messages from publishers and deliver them
to subscribers.
+让我们来看一下它具体是如何工作的。我们需要某种方式将事件从一个系统传递到另一个系统,就像我们的消息总线,但这是针对服务的。
+这种基础设施通常被称为 _消息代理_(message broker)。消息代理的作用是从发布者接收消息并将其传递给订阅者。
+
At MADE.com, we use https://eventstore.org[Event Store]; Kafka or RabbitMQ
are valid alternatives. A lightweight solution based on Redis
https://redis.io/topics/pubsub[pub/sub channels] can also work just fine, and because
Redis is much more generally familiar to people, we thought we'd use it for this
book.
+在 MADE.com,我们使用 https://eventstore.org[Event Store];Kafka 或 RabbitMQ 也是有效的替代方案。一个基于 Redis 的轻量级解决方案,
+即 https://redis.io/topics/pubsub[发布/订阅通道],同样可以很好地工作。由于 Redis 更为人所熟知,因此我们决定在本书中使用它。
+
NOTE: We're glossing over the complexity involved in choosing the right messaging
platform. Concerns like message ordering, failure handling, and idempotency
all need to be thought through. For a few pointers, see
<>.
+我们在这里略过了选择合适消息平台所涉及的复杂性。比如消息排序、故障处理以及幂等性等问题,都需要仔细考虑。有关一些提示,请参阅 <>。
Our new flow will look like <>:
Redis provides the `BatchQuantityChanged` event that kicks off the whole process, and our `Allocated` event is published back out to Redis again at the
end.
+我们的新流程将会像 <> 所示:Redis 提供了 `BatchQuantityChanged` 事件来启动整个流程,
+而我们的 `Allocated` 事件在流程结束时会再次发布回 Redis。
+
[role="width-75"]
[[reallocation_sequence_diagram_with_redis]]
-.Sequence diagram for reallocation flow
+.Sequence diagram for reallocation flow(重新分配流程的序列图)
image::images/apwp_1106.png[]
[role="image-source"]
----
@@ -396,6 +497,7 @@ MessageBus -> Redis : publish to line_allocated channel
=== Test-Driving It All Using an End-to-End Test
+使用端到端测试来驱动整个功能
((("microservices", "event-based integration", "testing with end-to-end test", id="ix_mcroevnttst")))
((("Redis pub/sub channel, using for microservices integration", "testing pub/sub model")))
@@ -403,9 +505,11 @@ MessageBus -> Redis : publish to line_allocated channel
Here's how we might start with an end-to-end test. We can use our existing
API to create batches, and then we'll test both inbound and outbound messages:
+以下是我们如何从端到端测试开始的方式。我们可以使用现有的 API 创建批次,然后测试传入和传出的消息:
+
[[redis_e2e_test]]
-.An end-to-end test for our pub/sub model (tests/e2e/test_external_events.py)
+.An end-to-end test for our pub/sub model (tests/e2e/test_external_events.py)(针对我们的发布/订阅模型的端到端测试)
====
[source,python]
----
@@ -443,9 +547,12 @@ def test_change_batch_quantity_leading_to_reallocation():
<1> You can read the story of what's going on in this test from the comments:
we want to send an event into the system that causes an order line to be
reallocated, and we see that reallocation come out as an event in Redis too.
+你可以从注释中了解此测试中发生的事情:我们希望将一个事件发送到系统中,触发一个订单项的重新分配,
+并且我们也希望看到该重新分配作为一个事件从 Redis 中发布出来。
<2> `api_client` is a little helper that we refactored out to share between
our two test types; it wraps our calls to `requests.post`.
+`api_client` 是一个小助手,我们将其重构出来以在两种测试类型之间共享;它封装了我们对 `requests.post` 的调用。
<3> `redis_client` is another little test helper, the details of which
don't really matter; its job is to be able to send and receive messages
@@ -453,11 +560,16 @@ def test_change_batch_quantity_leading_to_reallocation():
`change_batch_quantity` to send in our request to change the quantity for a
batch, and we'll listen to another channel called `line_allocated` to
look out for the expected reallocation.
+`redis_client` 是另一个小测试助手,其具体实现细节并不重要;它的任务是能够在各种 Redis 通道中发送和接收消息。
+我们将使用一个名为 `change_batch_quantity` 的通道发送请求以更改某个批次的数量,并监听另一个名为 `line_allocated` 的通道,
+用于检查预期的重新分配事件。
<4> Because of the asynchronous nature of the system under test, we need to use
the `tenacity` library again to add a retry loop—first, because it may
take some time for our new `line_allocated` message to arrive, but also
because it won't be the only message on that channel.
+由于被测试系统的异步特性,我们需要再次使用 `tenacity` 库来添加一个重试循环——一方面是因为我们的新 `line_allocated` 消息可能需要一些时间
+才能到达;另一方面是因为这条消息不会是该通道上的唯一消息。
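+The test body itself is elided in this diff, but the callouts let us reconstruct its shape. A condensed sketch follows; the `api_client` and `redis_client` helper methods and the random-ID helpers are assumptions based on the description above, not verbatim book code:
+
+[source,python]
+----
+import json
+from tenacity import Retrying, stop_after_delay
+
+
+def test_change_batch_quantity_leading_to_reallocation():
+    # Two batches for one SKU; the order starts out on the earlier batch
+    orderid, sku = random_orderid(), random_sku()
+    earlier_batch, later_batch = random_batchref("old"), random_batchref("new")
+    api_client.post_to_add_batch(earlier_batch, sku, qty=10, eta="2011-01-01")
+    api_client.post_to_add_batch(later_batch, sku, qty=10, eta="2011-01-02")
+    api_client.post_to_allocate(orderid, sku, qty=10)
+
+    subscription = redis_client.subscribe_to("line_allocated")
+
+    # Shrink the first batch so the order no longer fits on it
+    redis_client.publish_message(
+        "change_batch_quantity", {"batchref": earlier_batch, "qty": 5}
+    )
+
+    # Poll the channel until the reallocation event shows up (or time out)
+    messages = []
+    for attempt in Retrying(stop=stop_after_delay(3), reraise=True):
+        with attempt:
+            message = subscription.get_message(timeout=1)
+            if message:
+                messages.append(message)
+            data = json.loads(messages[-1]["data"])
+            assert data["orderid"] == orderid
+            assert data["batchref"] == later_batch
+----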
////
NITPICK (EJ3) Minor comment: This e2e test might not be safe or repeatable as
@@ -470,15 +582,18 @@ be too much of a digression.
==== Redis Is Another Thin Adapter Around Our Message Bus
+Redis 是围绕我们的消息总线的另一个轻量级适配器
((("Redis pub/sub channel, using for microservices integration", "testing pub/sub model", "Redis as thin adapter around message bus")))
((("message bus", "Redis pub/sub listener as thin adapter around")))
Our Redis pub/sub listener (we call it an _event consumer_) is very much like
Flask: it translates from the outside world to our events:
+我们的 Redis 发布/订阅监听器(我们称之为 _事件消费者_)与 Flask 非常相似:它将外部世界的消息转化为我们的事件:
+
[[redis_eventconsumer_first_cut]]
-.Simple Redis message listener (src/allocation/entrypoints/redis_eventconsumer.py)
+.Simple Redis message listener (src/allocation/entrypoints/redis_eventconsumer.py)(简单的 Redis 消息监听器)
====
[source,python]
----
@@ -503,16 +618,20 @@ def handle_change_batch_quantity(m):
====
<1> `main()` subscribes us to the `change_batch_quantity` channel on load.
+`main()` 在加载时会将我们订阅到 `change_batch_quantity` 通道上。
<2> Our main job as an entrypoint to the system is to deserialize JSON,
convert it to a `Command`, and pass it to the service layer--much as the
Flask adapter does.
+作为系统入口的主要任务是反序列化 JSON,将其转换为一个 `Command`,并将其传递给服务层——这与 Flask 适配器的工作方式非常相似。
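+The listener's body is elided by this diff; here is a sketch of the shape the callouts describe. The module paths follow the book's layout, and a `config.get_redis_host_and_port()` helper is assumed:
+
+[source,python]
+----
+import json
+import logging
+
+import redis
+
+from allocation import config
+from allocation.domain import commands
+from allocation.service_layer import messagebus, unit_of_work
+
+r = redis.Redis(**config.get_redis_host_and_port())
+
+
+def main():
+    # Subscribe to the inbound channel as soon as the process starts
+    pubsub = r.pubsub(ignore_subscribe_messages=True)
+    pubsub.subscribe("change_batch_quantity")
+    for m in pubsub.listen():
+        handle_change_batch_quantity(m)
+
+
+def handle_change_batch_quantity(m):
+    # Deserialize JSON, build a Command, hand it to the service layer
+    logging.debug("handling %s", m)
+    data = json.loads(m["data"])
+    cmd = commands.ChangeBatchQuantity(ref=data["batchref"], qty=data["qty"])
+    messagebus.handle(cmd, uow=unit_of_work.SqlAlchemyUnitOfWork())
+----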
We also build a new downstream adapter to do the opposite job—converting
domain events to public events:
+我们还构建了一个新的下游适配器来执行相反的工作——将领域事件转换为公共事件:
+
[[redis_eventpubisher_first_cut]]
-.Simple Redis message publisher (src/allocation/adapters/redis_eventpublisher.py)
+.Simple Redis message publisher (src/allocation/adapters/redis_eventpublisher.py)(简单的 Redis 消息发布器)
====
[source,python]
----
@@ -528,15 +647,19 @@ def publish(channel, event: events.Event): #<1>
<1> We take a hardcoded channel here, but you could also store
a mapping between event classes/names and the appropriate channel,
allowing one or more message types to go to different channels.
+我们在这里使用了一个硬编码的通道,但你也可以存储事件类/名称与相应通道之间的映射关系,从而允许一种或多种消息类型发送到不同的通道。
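+The publisher's body is also elided, and it can be very small indeed. A sketch under the same assumptions as the consumer above:
+
+[source,python]
+----
+import json
+import logging
+from dataclasses import asdict
+
+import redis
+
+from allocation import config
+from allocation.domain import events
+
+r = redis.Redis(**config.get_redis_host_and_port())
+
+
+def publish(channel, event: events.Event):
+    # The channel is passed in by the caller; a mapping from event
+    # types to channels could live here instead (see callout above).
+    logging.debug("publishing: channel=%s, event=%s", channel, event)
+    r.publish(channel, json.dumps(asdict(event)))
+----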
==== Our New Outgoing Event
+我们新的传出事件
((("Allocated event")))
Here's what the `Allocated` event will look like:
+以下是 `Allocated` 事件的样子:
+
[[allocated_event]]
-.New event (src/allocation/domain/events.py)
+.New event (src/allocation/domain/events.py)(新事件)
====
[source,python]
----
@@ -552,11 +675,15 @@ class Allocated(Event):
It captures everything we need to know about an allocation: the details of the
order line, and which batch it was allocated to.
+它捕获了我们需要了解的所有有关分配的信息:订单项的详细信息以及它被分配到的批次。
+
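+The event body is elided above; as a dataclass it plausibly carries just those fields (field names assumed from the surrounding text):
+
+[source,python]
+----
+from dataclasses import dataclass
+
+
+@dataclass
+class Allocated(Event):
+    orderid: str
+    sku: str
+    qty: int
+    batchref: str
+----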
We add it into our model's `allocate()` method (having added a test
first, naturally):
+我们将其添加到模型的 `allocate()` 方法中(当然,首先需要先添加一个测试):
+
[[model_emits_allocated_event]]
-.Product.allocate() emits new event to record what happened (src/allocation/domain/model.py)
+.Product.allocate() emits new event to record what happened (src/allocation/domain/model.py)(Product.allocate() 发出新事件以记录发生的事情)
====
[source,python]
----
@@ -584,9 +711,11 @@ class Product:
The handler for `ChangeBatchQuantity` already exists, so all we need to add
is a handler that publishes the outgoing event:
+`ChangeBatchQuantity` 的处理器已经存在,所以我们只需要添加一个发布传出事件的处理器即可:
+
[[another_handler]]
-.The message bus grows (src/allocation/service_layer/messagebus.py)
+.The message bus grows (src/allocation/service_layer/messagebus.py)(消息总线的扩展)
====
[source,python,highlight=2]
----
@@ -600,8 +729,10 @@ HANDLERS = {
((("Redis pub/sub channel, using for microservices integration", "testing pub/sub model", "publishing outgoing event")))
Publishing the event uses our helper function from the Redis wrapper:
+发布事件时会使用我们 Redis 封装模块中的辅助函数:
+
[[publish_event_handler]]
-.Publish to Redis (src/allocation/service_layer/handlers.py)
+.Publish to Redis (src/allocation/service_layer/handlers.py)(发布到 Redis)
====
[source,python]
----
@@ -614,6 +745,7 @@ def publish_allocated_event(
====
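+Both bodies are elided by the diff; a sketch of how the new handler and its registration might fit together (the handler dict name and the `OutOfStock` entry are assumptions based on earlier chapters):
+
+[source,python]
+----
+# src/allocation/service_layer/handlers.py
+def publish_allocated_event(
+    event: events.Allocated, uow: unit_of_work.AbstractUnitOfWork,
+):
+    redis_eventpublisher.publish("line_allocated", event)
+
+
+# src/allocation/service_layer/messagebus.py
+EVENT_HANDLERS = {
+    events.Allocated: [handlers.publish_allocated_event],
+    events.OutOfStock: [handlers.send_out_of_stock_notification],
+}
+----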
=== Internal Versus External Events
+内部事件与外部事件
((("events", "internal versus external")))
((("microservices", "event-based integration", "testing with end-to-end test", startref="ix_mcroevnttst")))
@@ -624,25 +756,34 @@ if you get into
https://oreil.ly/FXVil[event sourcing]
(very much a topic for another book, though).
+明确区分内部事件与外部事件是一个好主意。一些事件来自外部,一些事件会被升级并发布到外部,但并不是所有事件都会如此。
+如果你要深入研究 https://oreil.ly/FXVil[事件溯源](不过那是很适合另写一本书的主题),这一点尤其重要。
+
TIP: Outbound events are one of the places it's important to apply validation.
See <<appendix_validation>> for some validation philosophy and [.keep-together]#examples#.
+传出事件是需要应用验证的重要场所之一。有关验证的理念和 [.keep-together]#示例#,请参阅 <<appendix_validation>>。
[role="nobreakinside less_space"]
-.Exercise for the Reader
+.Exercise for the Reader(读者练习)
*******************************************************************************
A nice simple one for this chapter: make it so that the main `allocate()` use
case can also be invoked by an event on a Redis channel, as well as (or instead of)
via the API.
+本章的一个简单练习:使主要的 `allocate()` 用例既可以通过 Redis 通道上的事件调用,也可以(或替代)通过 API 调用。
+
You will likely want to add a new E2E test and feed through some changes into
[.keep-together]#__redis_eventconsumer.py__#.
+你可能需要添加一个新的端到端(E2E)测试,并将一些更改引入 [.keep-together]#__redis_eventconsumer.py__#。
+
*******************************************************************************
=== Wrap-Up
+总结
Events can come _from_ the outside, but they can also be published
externally--our `publish` handler converts an event to a message on a Redis
@@ -651,6 +792,9 @@ decoupling buys us a lot of flexibility in our application integrations, but
as always, it comes at a cost.
((("Fowler, Martin")))
+事件可以 _来自_ 外部,也可以被发布到外部——我们的 `publish` 处理器将事件转换为 Redis 通道上的消息。我们使用事件与外部世界进行通信。
+这种时间解耦为我们的应用程序集成带来了极大的灵活性,但正如往常一样,它也伴随着一定的代价。
+
++++
@@ -669,22 +813,29 @@ and modify.
<<chapter_11_external_events_tradeoffs>> shows some trade-offs to think about.
+<<chapter_11_external_events_tradeoffs>> 展示了一些需要考虑的权衡。
+
[[chapter_11_external_events_tradeoffs]]
[options="header"]
-.Event-based microservices integration: the trade-offs
+.Event-based microservices integration: the trade-offs(基于事件的微服务集成:权衡取舍)
|===
-|Pros|Cons
+|Pros(优点)|Cons(缺点)
a|
* Avoids the distributed big ball of mud.
+避免了分布式泥球问题。
* Services are decoupled: it's easier to change individual services and add
new ones.
+服务是解耦的:更容易更改单个服务并添加新服务。
a|
* The overall flows of information are harder to see.
+整体的信息流更难以直观查看。
* Eventual consistency is a new concept to deal with.
+最终一致性是需要应对的一个新概念。
* Message reliability and choices around at-least-once versus at-most-once delivery
need thinking through.
+需要仔细考虑消息可靠性以及至少一次交付与至多一次交付的选择。
|===
@@ -695,3 +846,5 @@ reliability and eventual consistency. Read on to <>.
((("microservices", "event-based integration", startref="ix_mcroevnt")))
((("event-driven architecture", "using events to integrate microservices", startref="ix_evntarch")))
((("external events", startref="ix_extevnt")))
+
+更广泛地说,如果你从同步消息模型转向异步模型,也会引入一系列与消息可靠性和最终一致性相关的问题。请继续阅读 <<chapter_12_cqrs>>。
diff --git a/chapter_12_cqrs.asciidoc b/chapter_12_cqrs.asciidoc
index c25030f7..9bba9a64 100644
--- a/chapter_12_cqrs.asciidoc
+++ b/chapter_12_cqrs.asciidoc
@@ -1,5 +1,6 @@
[[chapter_12_cqrs]]
== Command-Query Responsibility Segregation (CQRS)
+命令-查询职责分离(CQRS)
((("command-query responsibility segregation (CQRS)", id="ix_CQRS")))
((("CQRS", see="command-query responsibility segregation")))
@@ -9,16 +10,27 @@ reads (queries) and writes (commands) are different, so they
should be treated differently (or have their responsibilities segregated, if you will). Then we're going to push that insight as far
as we can.
+在本章中,我们将从一个相对没有争议的观点开始:
+读取(查询)和写入(命令)是不同的,因此它们应该被区别对待(或者说,它们的职责应该被分离)。随后,我们将尽可能地深入探讨这一观点。
+
If you're anything like Harry, this will all seem extreme at first,
but hopefully we can make the argument that it's not _totally_ unreasonable.
+如果你和 Harry 有点相似,那么一开始这一切可能看起来都有些极端,
+但希望我们能够证明这并不是 _完全_ 不合理的。
+
<<maps_chapter_11>> shows where we might end up.
+<<maps_chapter_11>> 展示了我们可能最终达到的样子。
+
[TIP]
====
The code for this chapter is in the
chapter_12_cqrs branch https://oreil.ly/YbWGT[on [.keep-together]#GitHub#].
+本章的代码位于 https://oreil.ly/YbWGT[GitHub] 上的
+chapter_12_cqrs 分支。
+
----
git clone https://github.com/cosmicpython/code.git
cd code
@@ -30,11 +42,14 @@ git checkout chapter_11_external_events
First, though, why bother?
+不过首先,为什么要费这个劲呢?
+
[[maps_chapter_11]]
-.Separating reads from writes
+.Separating reads from writes(将读取与写入分离)
image::images/apwp_1201.png[]
=== Domain Models Are for Writing
+领域模型是用于写入的
((("domain model", "writing data")))
((("command-query responsibility segregation (CQRS)", "domain models for writing")))
@@ -42,15 +57,21 @@ We've spent a lot of time in this book talking about how to build software that
enforces the rules of our domain. These rules, or constraints, will be different
for every application, and they make up the interesting core of our systems.
+在这本书中,我们花了大量时间讨论如何构建能够强制执行领域规则的软件。这些规则或约束对于每个应用程序而言都是不同的,它们构成了我们系统的有趣核心。
+
In this book, we've set explicit constraints like "You can't allocate more stock
than is available," as well as implicit constraints like "Each order line is
allocated to a single batch."
+在这本书中,我们设置了显式约束,例如“你不能分配超过可用库存的数量”,以及隐式约束,例如“每个订单项只能分配到一个批次”。
+
We wrote down these rules as unit tests at the beginning of the book:
+我们在本书开篇时将这些规则写成了单元测试:
+
[role="pagebreak-before"]
[[domain_tests]]
-.Our basic domain tests (tests/unit/test_batches.py)
+.Our basic domain tests (tests/unit/test_batches.py)(我们的基础领域测试)
====
[source,python]
----
@@ -74,27 +95,44 @@ To apply these rules properly, we needed to ensure that operations
were consistent, and so we introduced patterns like _Unit of Work_ and _Aggregate_
that help us commit small chunks of work.
+为了正确地应用这些规则,我们需要确保操作的一致性,因此我们引入了类似 _工作单元(Unit of Work)_ 和 _聚合(Aggregate)_ 这样的模式
+来帮助我们提交小块的工作。
+
To communicate changes between those small chunks, we introduced the Domain Events pattern
so we can write rules like "When stock is damaged or lost, adjust the
available quantity on the batch, and reallocate orders if necessary."
+为了在这些小块之间传递变更,我们引入了领域事件(Domain Events)模式,使我们能够编写类似这样的规则:“当库存受损或丢失时,
+调整批次中的可用数量,并在必要时重新分配订单。”
+
All of this complexity exists so we can enforce rules when we change the
state of our system. We've built a flexible set of tools for writing data.
+所有这些复杂性都存在的目的,是为了在我们更改系统状态时能够强制执行规则。我们已经构建了一套灵活的工具集来进行数据写入。
+
What about reads, though?
+那么读取呢?
+
=== Most Users Aren't Going to Buy Your Furniture
+大多数用户不会购买你的家具
((("command-query responsibility segregation (CQRS)", "reads")))
At MADE.com, we have a system very like the allocation service. In a busy day, we
might process one hundred orders in an hour, and we have a big gnarly system for
allocating stock to those orders.
+在 MADE.com,我们有一个非常类似分配服务的系统。在繁忙的一天里,我们可能每小时处理一百个订单,
+并且我们有一个复杂的大型系统用于将库存分配给这些订单。
+
In that same busy day, though, we might have one hundred product views per _second_.
Each time somebody visits a product page, or a product listing page, we need
to figure out whether the product is still in stock and how long it will take
us to deliver it.
+然而,在同样繁忙的一天里,我们每秒可能会有一百次产品浏览。
+每次有人访问产品页面或产品列表页面时,我们都需要确定产品是否仍有库存,以及需要多长时间才能交付。
+
((("eventually consistent reads")))
((("consistency", "eventually consistent reads")))
The _domain_ is the same--we're concerned with batches of stock, and their
@@ -104,8 +142,11 @@ is a few seconds out of date, but if our allocate service is inconsistent,
we'll make a mess of their orders. We can take advantage of this difference by
making our reads _eventually consistent_ in order to make them perform better.
+_领域_ 是相同的——我们关注的是库存批次、它们的到达日期以及仍然可用的数量——但访问模式却非常不同。例如,如果查询结果存在几秒的延迟,
+客户可能不会察觉到,但如果我们的分配服务出现不一致,那么我们就可能搞砸他们的订单。我们可以利用这一差异,通过使读取实现 _最终一致性_ 来提高性能。
+
[role="nobreakinside less_space"]
-.Is Read Consistency Truly Attainable?
+.Is Read Consistency Truly Attainable?(读取一致性真的可以实现吗?)
*******************************************************************************
((("command-query responsibility segregation (CQRS)", "reads", "consistency of")))
@@ -113,65 +154,94 @@ making our reads _eventually consistent_ in order to make them perform better.
This idea of trading consistency against performance makes a lot of developers
[.keep-together]#nervous# at first, so let's talk quickly about that.
+这种牺牲一致性来换取性能的想法,一开始会让很多开发者感到紧张,所以让我们快速讨论一下这个问题。
+
Let's imagine that our "Get Available Stock" query is 30 seconds out of date
when Bob visits the page for `ASYMMETRICAL-DRESSER`.
Meanwhile, though, Harry has already bought the last item. When we try to
allocate Bob's order, we'll get a failure, and we'll need to either cancel his
order or buy more stock and delay his delivery.
+让我们想象一下,当 Bob 访问 `ASYMMETRICAL-DRESSER` 页面时,“获取可用库存”的查询结果已经延迟了 30 秒。与此同时,
+Harry 已经购买了最后一件商品。当我们尝试为 Bob 的订单分配库存时,会发生失败,我们要么需要取消他的订单,要么采购更多库存并延迟他的交付。
+
People who've worked only with relational data stores get _really_ nervous
about this problem, but it's worth considering two other scenarios to gain some
perspective.
+只接触过关系型数据存储的人会对这个问题感到 _非常_ 紧张,但值得通过考虑另外两种情境来获得一些不同的视角。
+
First, let's imagine that Bob and Harry both visit the page at _the same
time_. Harry goes off to make coffee, and by the time he returns, Bob has
already bought the last dresser. When Harry places his order, we send it to
the allocation service, and because there's not enough stock, we have to refund
his payment or buy more stock and delay his delivery.
+首先,假设 Bob 和 Harry 同时访问了页面。Harry 去泡咖啡了,当他回来时,Bob 已经购买了最后一个柜子。当 Harry 下订单时,
+我们将其发送到分配服务,然而由于库存不足,我们不得不退款给他,或者采购更多库存并延迟他的交付。
+
As soon as we render the product page, the data is already stale. This insight
is key to understanding why reads can be safely inconsistent: we'll always need
to check the current state of our system when we come to allocate, because all
distributed systems are inconsistent. As soon as you have a web server and two
customers, you have the potential for stale data.
+一旦我们渲染了产品页面,数据实际上已经是过时的。这个认知是理解为什么读取可以安全地不一致的关键:当我们进行分配时,
+总是需要检查系统的当前状态,因为所有分布式系统都是不一致的。一旦你有了一个网页服务器和两个客户,就有可能出现数据过时的情况。
+
OK, let's assume we solve that problem somehow: we magically build a totally
consistent web application where nobody ever sees stale data. This time Harry
gets to the page first and buys his dresser.
+好吧,让我们假设我们以某种方式解决了这个问题:我们神奇地构建了一个完全一致的 Web 应用程序,确保没有人会看到过时的数据。
+这次是 Harry 先进入页面并购买了他的柜子。
+
Unfortunately for him, when the warehouse staff tries to dispatch his furniture,
it falls off the forklift and smashes into a zillion pieces. Now what?
+不幸的是,当仓库工作人员尝试发货时,他的家具从叉车上掉下来,摔得粉碎。那么现在该怎么办呢?
+
The only options are to either call Harry and refund his order or buy more
stock and delay delivery.
+唯一的选择是要么打电话给 Harry 并为他的订单退款,要么采购更多库存并推迟交付。
+
No matter what we do, we're always going to find that our software systems are
inconsistent with reality, and so we'll always need business processes to cope
with these edge cases. It's OK to trade performance for consistency on the
read side, because stale data is essentially unavoidable.
+
+无论我们做什么,总会发现我们的软件系统与现实存在不一致,因此我们始终需要业务流程来处理这些边缘情况。
+在读取方面,用性能换取一致性是可以接受的,因为过时数据本质上是不可避免的。
*******************************************************************************
((("command-query responsibility segregation (CQRS)", "read side and write side")))
We can think of these requirements as forming two halves of a system:
the read side and the write side, shown in <<read_and_write_table>>.
+我们可以将这些需求看作系统的两个部分:读取端和写入端,如 <<read_and_write_table>> 所示。
+
For the write side, our fancy domain architectural patterns help us to evolve
our system over time, but the complexity we've built so far doesn't buy
anything for reading data. The service layer, the unit of work, and the clever
domain model are just bloat.
+对于写入端,高级的领域架构模式帮助我们随着时间演进系统,但迄今为止构建的这些复杂性对读取数据毫无帮助。
+服务层、工作单元(Unit of Work)以及巧妙的领域模型在这里只是冗余。
+
[[read_and_write_table]]
-.Read versus write
+.Read versus write(读取与写入对比)
[options="header"]
|===
-| | Read side | Write side
-| Behavior | Simple read | Complex business logic
-| Cacheability | Highly cacheable | Uncacheable
-| Consistency | Can be stale | Must be transactionally consistent
+| | Read side(读取端) | Write side(写入端)
+| Behavior(行为) | Simple read(简单读取) | Complex business logic(复杂的业务逻辑)
+| Cacheability(可缓存性) | Highly cacheable(高度可缓存) | Uncacheable(不可缓存)
+| Consistency(一致性) | Can be stale(可以是过时的) | Must be transactionally consistent(必须具备事务一致性)
|===
=== Post/Redirect/Get and CQS
+Post/Redirect/Get 与 CQS
((("Post/Redirect/Get pattern")))
((("Post/Redirect/Get pattern", "command-query separation (CQS)")))
@@ -183,16 +253,24 @@ HTTP POST and responds with a redirect to see the result. For example, we might
accept a POST to _/batches_ to create a new batch and redirect the user to
_/batches/123_ to see their newly created batch.
+如果你从事 Web 开发,你可能对 Post/Redirect/Get 模式非常熟悉。在这种技术中,Web 端点接收一个 HTTP POST 请求并通过重定向来显示结果。
+例如,我们可能接收一个发到 _/batches_ 的 POST 请求来创建一个新批次,并将用户重定向到 _/batches/123_ 来查看他们新创建的批次。
+
This approach fixes the problems that arise when users refresh the results page
in their browser or try to bookmark a results page. In the case of a refresh,
it can lead to our users double-submitting data and thus buying two sofas when they
needed only one. In the case of a bookmark, our hapless customers will end up
with a broken page when they try to GET a POST endpoint.
+这种方法解决了用户在浏览器中刷新结果页面或尝试为结果页面添加书签时可能出现的问题。在刷新情况下,用户可能会重复提交数据,
+从而导致他们买了两张沙发,而实际上只需要一张。在书签情况下,当用户尝试 GET 一个 POST 端点时,会导致页面损坏,从而让顾客感到困惑。
+
Both these problems happen because we're returning data in response to a write
operation. Post/Redirect/Get sidesteps the issue by separating the read and
write phases of our operation.
+这两个问题都发生在我们在响应写操作时返回数据的情况下。Post/Redirect/Get 通过将操作的读写阶段分离开来,巧妙地避开了这些问题。
+
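+A minimal Flask sketch of the pattern; the `create_batch` and `render_batch` helpers are hypothetical placeholders for the write and read work:
+
+[source,python]
+----
+from flask import Flask, redirect, request, url_for
+
+app = Flask(__name__)
+
+
+@app.route("/batches", methods=["POST"])
+def add_batch():
+    batch_id = create_batch(request.form)  # write phase: modify state
+    # Redirecting means a browser refresh re-issues a harmless GET,
+    # not a second POST (no double-bought sofas).
+    return redirect(url_for("get_batch", batch_id=batch_id))
+
+
+@app.route("/batches/<batch_id>", methods=["GET"])
+def get_batch(batch_id):
+    return render_batch(batch_id)  # read phase: answer a question
+----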
This technique is a simple example of command-query separation (CQS).footnote:[
We're using the terms somewhat interchangeably, but CQS is normally something you
apply to a single class or module: functions that read state should be separate from
@@ -203,10 +281,18 @@ We follow one simple rule: functions should either modify state or answer
questions, but never both. This makes software easier to reason about: we should
always be able to ask, "Are the lights on?" without flicking the light switch.
+这种技术是命令-查询分离(CQS)的一个简单示例。footnote:[我们在这里对术语的使用稍有混用,但通常情况下,
+CQS 应用在单个类或模块上:负责读取状态的函数应该与修改状态的函数分离。而 CQRS 则是应用于整个应用程序的:
+负责读取状态的类、模块、代码路径,甚至数据库,都可以与负责修改状态的部分分离开来。]
+我们遵循一个简单的规则:函数应该要么修改状态,要么回答问题,但绝不能同时做这两件事。这使得软件更容易推理:
+我们应该始终能够在不拨动电灯开关的情况下问出“灯是开着的吗?”。
+
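+The rule is easy to see in code. A toy illustration of ours, not the book's:
+
+[source,python]
+----
+# Violates CQS: one call both modifies state and answers a question
+def toggle_and_report(switch):
+    switch.on = not switch.on
+    return switch.on
+
+
+# Respects CQS: the command changes state and returns nothing;
+# the query answers the question without side effects.
+def turn_on(switch):
+    switch.on = True
+
+
+def is_on(switch):
+    return switch.on
+----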
NOTE: When building APIs, we can apply the same design technique by returning a
201 Created, or a 202 Accepted, with a Location header containing the URI
of our new resources. What's important here isn't the status code we use
but the logical separation of work into a write phase and a query phase.
+在构建 API 时,我们可以应用相同的设计技巧,通过返回一个 `201 Created` 或 `202 Accepted` 状态码,并在 Location 头部中包含新资源的 URI。
+这里重要的不是我们使用的状态码,而是将工作逻辑清晰地分为“写入阶段”和“查询阶段”。
As you'll see, we can use the CQS principle to make our systems faster and more
scalable, but first, let's fix the CQS violation in our existing code. Ages
@@ -216,9 +302,13 @@ OK and the batch ID. That's led to some ugly design flaws so that we can get
the data we need. Let's change it to return a simple OK message and instead
provide a new read-only endpoint to retrieve allocation state:
+正如你将看到的,我们可以利用 CQS 原则让系统运行得更加快速且具有可扩展性,但首先,让我们修复现有代码中违反 CQS 的情况。很久以前,
+我们引入了一个 `allocate` 端点,它接收一个订单并调用服务层来分配库存。在调用结束时,我们返回一个 200 OK 和批次 ID。为了获取所需的数据,
+这种做法导致了一些难看的设计缺陷。现在,让我们将其改为仅返回一个简单的 OK 消息,并新增一个只读端点来获取分配状态:
+
[[api_test_does_get_after_post]]
-.API test does a GET after the POST (tests/e2e/test_api.py)
+.API test does a GET after the POST (tests/e2e/test_api.py)(API 测试在 POST 之后执行 GET)
====
[source,python]
----
@@ -263,9 +353,11 @@ def test_unhappy_path_returns_400_and_error_message():
((("Flask framework", "endpoint for viewing allocations")))
OK, what might the Flask app look like?
+好的,那么 Flask 应用可能会是什么样子呢?
+
[[flask_app_calls_view]]
-.Endpoint for viewing allocations (src/allocation/entrypoints/flask_app.py)
+.Endpoint for viewing allocations (src/allocation/entrypoints/flask_app.py)(查看分配的端点)
====
[source,python]
----
@@ -285,9 +377,12 @@ def allocations_view_endpoint(orderid):
<1> All right, a _views.py_, fair enough; we can keep read-only stuff in there,
and it'll be a real _views.py_, not like Django's, something that knows how
to build read-only views of our data...
+好的,一个 _views.py_ 文件,听起来很合理;我们可以把只读的内容放在那里,并且它将是一个真正的 _views.py_ 文件,
+不像 Django 的那种,而是一些了解如何构建我们数据只读视图的东西...
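+The endpoint body is elided here; reconstructed from the callout, it might look roughly like this (the `views.allocations` signature is taken from the next section):
+
+[source,python]
+----
+from flask import jsonify
+
+from allocation import views
+
+
+@app.route("/allocations/<orderid>", methods=["GET"])
+def allocations_view_endpoint(orderid):
+    uow = unit_of_work.SqlAlchemyUnitOfWork()
+    result = views.allocations(orderid, uow)  # read-only logic lives in views.py
+    if not result:
+        return "not found", 404
+    return jsonify(result), 200
+----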
[[hold-on-ch12]]
=== Hold On to Your Lunch, Folks
+抓稳了,各位!
((("SQL", "raw SQL in views")))
((("repositories", "adding list method to existing repository object")))
@@ -295,9 +390,11 @@ def allocations_view_endpoint(orderid):
Hmm, so we can probably just add a list method to our existing repository
object:
+嗯,那么我们可能只需要在现有的仓储对象中添加一个列表方法:
+
[[views_dot_py]]
-.Views do...raw SQL? (src/allocation/views.py)
+.Views do...raw SQL? (src/allocation/views.py)(视图中执行...原生 SQL?)
====
[source,python]
[role="non-head"]
@@ -324,6 +421,8 @@ def allocations(orderid: str, uow: unit_of_work.SqlAlchemyUnitOfWork):
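+The view body itself is elided by the diff; a rough reconstruction (table and column names assumed from the book's ORM, and note that newer SQLAlchemy versions want the query wrapped in `text()`) gives the next line's reaction something to react to:
+
+[source,python]
+----
+from allocation.service_layer import unit_of_work
+
+
+def allocations(orderid: str, uow: unit_of_work.SqlAlchemyUnitOfWork):
+    with uow:
+        results = list(uow.session.execute(
+            "SELECT ol.sku, b.reference"
+            " FROM allocations AS a"
+            " JOIN batches AS b ON a.batch_id = b.id"
+            " JOIN order_lines AS ol ON a.orderline_id = ol.id"
+            " WHERE ol.orderid = :orderid",
+            dict(orderid=orderid),
+        ))
+    return [{"sku": sku, "batchref": batchref} for sku, batchref in results]
+----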
_Excuse me? Raw SQL?_
+_不好意思,你说什么?原生 SQL?_
+
If you're anything like Harry encountering this pattern for the first time,
you'll be wondering what on earth Bob has been smoking. We're hand-rolling our
own SQL now, and converting database rows directly to dicts? After all the
@@ -331,21 +430,31 @@ effort we put into building a nice domain model? And what about the Repository
pattern? Isn't that meant to be our abstraction around the database? Why don't
we reuse that?
+如果你和第一次遇到这种模式的 Harry 一样,你可能会疑惑 Bob 到底在抽什么东西。我们现在竟然开始手写 SQL,还直接将数据库行转换成字典?
+那我们之前花了那么多精力构建一个优雅的领域模型算什么?还有仓储模式呢?它不正是用来作为数据库的抽象层吗?为什么我们不重复利用它呢?
+
Well, let's explore that seemingly simpler alternative first, and see what it
looks like in practice.
+那么,我们先来探索一下那个看似更简单的替代方案,看看它在实际中的表现是什么样的。
+
We'll still keep our view in a separate _views.py_ module; enforcing a clear
distinction between reads and writes in your application is still a good idea.
We apply command-query separation, and it's easy to see which code modifies
state (the event handlers) and which code just retrieves read-only state (the views).
+我们仍然会将视图保存在一个单独的 _views.py_ 模块中;在应用中强制区分读操作和写操作依然是一个好主意。我们应用了命令-查询分离原则,
+这使得很容易区分哪些代码是修改状态的(事件处理器),哪些代码只是用来检索只读状态的(视图)。
+
TIP: Splitting out your read-only views from your state-modifying
command and event handlers is probably a good idea, even if you
don't want to go to full-blown CQRS.
+即使你不打算完全采用 CQRS,将只读视图与修改状态的命令和事件处理器分离开来可能也是一个好主意。
=== Testing CQRS Views
+测试 CQRS 视图
((("views", "testing CQRS views")))
((("testing", "integration test for CQRS view")))
@@ -354,9 +463,11 @@ Before we get into exploring various options, let's talk about testing.
Whichever approaches you decide to go for, you're probably going to need
at least one integration test. Something like this:
+在我们开始探索各种选项之前,先来谈谈测试。不管你决定采用哪种方法,你可能至少都需要一个集成测试。它可能会像这样:
+
[[integration_testing_views]]
-.An integration test for a view (tests/integration/test_views.py)
+.An integration test for a view (tests/integration/test_views.py)(视图的集成测试)
====
[source,python]
----
@@ -381,6 +492,7 @@ def test_allocations_view(sqlite_session_factory):
<1> We do the setup for the integration test by using the public entrypoint to
our application, the message bus. That keeps our tests decoupled from
any implementation/infrastructure details about how things get stored.
+我们通过使用应用程序的公共入口点(消息总线)来为集成测试进行设置。这样可以让我们的测试与存储方法的任何实现/基础设施细节解耦。
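+The test body is elided; a sketch of its shape based on the callout (command signatures and the `sqlite_session_factory` fixture follow the book's earlier chapters; the values are invented):
+
+[source,python]
+----
+from datetime import date
+
+
+def test_allocations_view(sqlite_session_factory):
+    uow = unit_of_work.SqlAlchemyUnitOfWork(sqlite_session_factory)
+    # Set up through the public entrypoint: the message bus
+    messagebus.handle(commands.CreateBatch("sku1batch", "sku1", 50, None), uow)
+    messagebus.handle(commands.CreateBatch("sku2batch", "sku2", 50, date.today()), uow)
+    messagebus.handle(commands.Allocate("order1", "sku1", 20), uow)
+    messagebus.handle(commands.Allocate("order1", "sku2", 20), uow)
+
+    assert views.allocations("order1", uow) == [
+        {"sku": "sku1", "batchref": "sku1batch"},
+        {"sku": "sku2", "batchref": "sku2batch"},
+    ]
+----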
////
IDEA: sidebar on testing views. some old content follows.
@@ -420,15 +532,18 @@ code with more complex business logic.
=== "Obvious" Alternative 1: Using the Existing Repository
+“显而易见”的替代方案 1:使用现有的仓储
((("views", "simple view that uses the repository")))
((("command-query responsibility segregation (CQRS)", "simple view using existing repository")))
((("repositories", "simple view using existing repository")))
How about adding a helper method to our `products` repository?
+在我们的 `products` 仓储中添加一个辅助方法怎么样?
+
[[view_using_repo]]
-.A simple view that uses the repository (src/allocation/views.py)
+.A simple view that uses the repository (src/allocation/views.py)(使用仓储的简单视图)
====
[source,python]
[role="skip"]
@@ -450,19 +565,24 @@ def allocations(orderid: str, uow: unit_of_work.AbstractUnitOfWork):
<1> Our repository returns `Product` objects, and we need to find all the
products for the SKUs in a given order, so we'll build a new helper method
called `.for_order()` on the repository.
+我们的仓储返回 `Product` 对象,而我们需要根据给定订单中的 SKU 找到所有的产品,因此我们将在仓储中构建一个名为 `.for_order()` 的新辅助方法。
<2> Now we have products but we actually want batch references, so we
get all the possible batches with a list comprehension.
+现在我们有了产品,但实际上我们需要的是批次引用,因此我们使用列表推导式获取所有可能的批次。
<3> We filter _again_ to get just the batches for our specific
order. That, in turn, relies on our `Batch` objects being able to tell us
which order IDs it has allocated.
+我们 _再次_ 进行过滤,以仅获取针对特定订单的批次。这又依赖于我们的 `Batch` 对象能够告诉我们它已分配了哪些订单 ID。
We implement that last using a `.orderid` property:
+我们通过实现一个 `.orderid` 属性来完成最后一步:
+
[[orderids_on_batch]]
-.An arguably unnecessary property on our model (src/allocation/domain/model.py)
+.An arguably unnecessary property on our model (src/allocation/domain/model.py)(一个在我们的模型中可以说是多余的属性)
====
[source,python]
[role="skip"]
@@ -481,11 +601,17 @@ is not as straightforward as you might have assumed. We've had to add new helpe
methods to both, and we're doing a bunch of looping and filtering in Python, which
is work that would be done much more efficiently by the database.
+你可以开始发现,重用我们现有的仓储和领域模型类并不像你可能想象的那样简单。我们需要在两者中都添加新的辅助方法,
+而且我们在 _Python_ 中进行了一堆循环和过滤,而这些工作实际上由数据库来完成会高效得多。
+
So yes, on the plus side we're reusing our existing abstractions, but on the
downside, it all feels quite clunky.
+所以是的,好的一面是我们重用了现有的抽象,但坏的一面是,这一切看起来都相当笨拙。
+
=== Your Domain Model Is Not Optimized for Read Operations
+你的领域模型并未针对读操作进行优化
((("domain model", "not optimized for read operations")))
((("command-query responsibility segregation (CQRS)", "domain model not optimized for read operations")))
@@ -493,26 +619,38 @@ What we're seeing here are the effects of having a domain model that
is designed primarily for write operations, while our requirements for
reads are often conceptually quite different.
+我们在这里看到的是一个主要为写操作设计的领域模型所带来的影响,而我们对读操作的需求在概念上通常与之大不相同。
+
This is the chin-stroking-architect's justification for CQRS. As we've said before,
a domain model is not a data model--we're trying to capture the way the
business works: workflow, rules around state changes, messages exchanged;
concerns about how the system reacts to external events and user input.
_Most of this stuff is totally irrelevant for read-only operations_.
+这就是那些沉思的架构师们为 CQRS 提出的理由。正如我们之前所说,领域模型并不是数据模型——我们试图捕捉业务的运作方式:工作流程、
+状态变化的规则、交换的消息;以及系统如何对外部事件和用户输入作出反应的关注点。_这些内容中的大部分与只读操作完全无关_。
+
TIP: This justification for CQRS is related to the justification for the Domain
Model pattern. If you're building a simple CRUD app, reads and writes are
going to be closely related, so you don't need a domain model or CQRS. But
the more complex your domain, the more likely you are to need both.
+这种对 CQRS 的解释与领域模型模式的解释是相关的。如果你在构建一个简单的 CRUD 应用,读操作和写操作会密切相关,因此你不需要领域模型或 CQRS。
+但你的领域越复杂,就越有可能同时需要它们。
To make a facile point, your domain classes will have multiple methods for
modifying state, and you won't need any of them for read-only operations.
+简单来说,你的领域类会有多个用来修改状态的方法,而在只读操作中,你将完全不需要这些方法。
+
As the complexity of your domain model grows, you will find yourself making
more and more choices about how to structure that model, which make it more and
more awkward to use for read operations.
+随着领域模型复杂性的增加,你会发现自己需要做出越来越多关于如何构建该模型的选择,而这些选择会让它在进行读操作时显得越来越别扭。
+
=== "Obvious" Alternative 2: Using the ORM
+“显而易见”的替代方案 2:使用 ORM
((("command-query responsibility segregation (CQRS)", "view that uses the ORM")))
((("views", "simple view that uses the ORM")))
@@ -521,8 +659,10 @@ You may be thinking, OK, if our repository is clunky, and working with
`Products` is clunky, then I can at least use my ORM and work with `Batches`.
That's what it's for!
+你可能会想,好吧,如果我们的仓储很笨拙,操作 `Products` 也很笨拙,那么至少我可以使用我的 ORM,并操作 `Batches`。这不正是它的用途吗!
+
[[view_using_orm]]
-.A simple view that uses the ORM (src/allocation/views.py)
+.A simple view that uses the ORM (src/allocation/views.py)(使用 ORM 的简单视图)
====
[source,python]
[role="skip"]
@@ -548,6 +688,9 @@ version from the code example in <>? It may not look too bad up th
can tell you it took several attempts, and plenty of digging through the
SQLAlchemy docs. SQL is just SQL.
+但这真的比 <<views_dot_py>> 代码示例中的原生 SQL 版本更容易编写或理解吗?从表面上看,它可能不算太糟,但我们可以告诉你,
+这实际上经历了多次尝试,并且花了大量时间查阅 SQLAlchemy 的文档。而 SQL 就只是 SQL。
+
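+For comparison, the elided ORM version might look roughly like this (relationship and attribute names assumed from the book's classical mappings):
+
+[source,python]
+----
+from allocation.domain import model
+from allocation.service_layer import unit_of_work
+
+
+def allocations(orderid: str, uow: unit_of_work.SqlAlchemyUnitOfWork):
+    with uow:
+        batches = uow.session.query(model.Batch).join(
+            model.OrderLine, model.Batch._allocations
+        ).filter(
+            model.OrderLine.orderid == orderid,
+        )
+        return [{"sku": b.sku, "batchref": b.reference} for b in batches]
+----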
////
IDEA (hynek)
this seems like a PERFECT opportunity to talk about SQLAlchemy Core API. If you
@@ -557,18 +700,24 @@ baby/bathwater.
But the ORM can also expose us to performance problems.
+但是,ORM 也可能会让我们面临性能问题。
+
=== SELECT N+1 and Other Performance Considerations
+SELECT N+1 和其他性能考虑因素
((("SELECT N+1")))
((("object-relational mappers (ORMs)", "SELECT N+1 performance problem")))
((("command-query responsibility segregation (CQRS)", "SELECT N+1 and other performance problems")))
-The so-called https://oreil.ly/OkBOS[`SELECT N+1`]
-problem is a common performance problem with ORMs: when retrieving a list of
-objects, your ORM will often perform an initial query to, say, get all the IDs
-of the objects it needs, and then issue individual queries for each object to
-retrieve their attributes. This is especially likely if there are any foreign-key relationships on your objects.
+The so-called https://oreil.ly/OkBOS[`SELECT N+1`]
+problem is a common performance problem with ORMs: when retrieving a list of
+objects, your ORM will often perform an initial query to, say, get all the IDs
+of the objects it needs, and then issue individual queries for each object to
+retrieve their attributes. This is especially likely if there are any foreign-key relationships on your objects.
+
+所谓的 https://oreil.ly/OkBOS[`SELECT N+1`] 问题是 ORM 中一个常见的性能问题:在检索对象列表时,ORM 通常会执行一个初始查询,
+比如获取它需要的所有对象的 ID,然后为每个对象单独发起查询以检索其属性。如果你的对象上存在任何外键关系,这种情况尤其可能发生。
NOTE: In all fairness, we should say that SQLAlchemy is quite good at avoiding
the `SELECT N+1` problem. It doesn't display it in the preceding example, and
@@ -576,6 +725,8 @@ NOTE: In all fairness, we should say that SQLAlchemy is quite good at avoiding
explicitly to avoid it when dealing with joined objects.
((("eager loading")))
((("SQLAlchemy", "SELECT N+1 problem and")))
+平心而论,我们需要说明 SQLAlchemy 在避免 `SELECT N+1` 问题方面做得相当不错。在前面的示例中并未出现该问题,
+并且你可以通过显式请求 https://oreil.ly/XKDDm[预加载(eager loading)] 来在处理关联对象时避免该问题。
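+With SQLAlchemy, for instance, the difference is a single `options()` call. A sketch, assuming a mapped `Batch` with an `_allocations` relationship as in this book's model:
+
+[source,python]
+----
+from sqlalchemy.orm import joinedload
+
+# Lazy loading (the default) can cost one extra SELECT per parent row:
+batches = session.query(model.Batch).all()
+for batch in batches:
+    print(batch._allocations)  # each access here may issue another query
+
+# Eager loading fetches parents and children in a single round trip:
+batches = session.query(model.Batch).options(
+    joinedload(model.Batch._allocations)
+).all()
+----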
Beyond `SELECT N+1`, you may have other reasons for wanting to decouple the
way you persist state changes from the way that you retrieve current state.
@@ -584,19 +735,28 @@ write operations never cause data corruption. But retrieving data using lots
of joins can be slow. It's common in such cases to add some denormalized views,
build read replicas, or even add caching layers.
+除了 `SELECT N+1` 之外,你可能还有其他原因想要将持久化状态变化的方式与检索当前状态的方式解耦。
+一组完全范式化的关系表是一种确保写操作不会导致数据损坏的好方法。然而,使用大量连接(joins)来检索数据可能会很慢。在这种情况下,
+常见的做法是添加一些反范式的视图、构建只读副本,甚至添加缓存层。
+
=== Time to Completely Jump the Shark
+是时候彻底放飞自我了
((("views", "keeping totally separate, denormalized datastore for view model")))
((("command-query responsibility segregation (CQRS)", "denormalized copy of your data optimized for read operations")))
On that note: have we convinced you that our raw SQL version isn't so weird as
it first seemed? Perhaps we were exaggerating for effect? Just you wait.
+说到这里:我们有没有让你相信,其实我们的原生 SQL 版本并没有最初看上去那么奇怪?也许我们为了效果有些夸张?拭目以待吧。
+
So, reasonable or not, that hardcoded SQL query is pretty ugly, right? What if
we made it nicer...
+那么,不管它是否合理,那段硬编码的 SQL 查询看起来确实很难看,对吧?如果我们让它更优雅一些呢...
+
[[much_nicer_query]]
-.A much nicer query (src/allocation/views.py)
+.A much nicer query (src/allocation/views.py)(一个更好看的查询)
====
[source,python]
----
@@ -614,8 +774,10 @@ def allocations(orderid: str, uow: unit_of_work.SqlAlchemyUnitOfWork):
...by _keeping a totally separate, denormalized data store for our view model_?
+...通过 _为我们的视图模型保留一个完全独立的反范式数据存储_?
+
[[new_table]]
-.Hee hee hee, no foreign keys, just strings, YOLO (src/allocation/adapters/orm.py)
+.Hee hee hee, no foreign keys, just strings, YOLO (src/allocation/adapters/orm.py)(嘿嘿嘿,没有外键,只有字符串,YOLO)
====
[source,python]
----
@@ -634,22 +796,36 @@ OK, nicer-looking SQL queries wouldn't be a justification for anything really,
but building a denormalized copy of your data that's optimized for read operations
isn't uncommon, once you've reached the limits of what you can do with indexes.
+好的,更优雅的 SQL 查询并不足以作为某种解决方案的理由,但一旦你达到了索引优化的极限,
+为你的数据构建一个专门针对读操作优化的反范式化副本其实并不罕见。
+
Even with well-tuned indexes, a relational database uses a lot of CPU to perform
joins. The fastest queries will always be pass:[