
Commit 70f42a0

Joey0538 and claude authored
feat(search-sync-worker): add spotlight + user-room sync via INBOX (#109)
* feat(search-sync-worker): add spotlight and user-room sync collections Add two Collection implementations in search-sync-worker that consume member_added/member_removed events from the INBOX stream and maintain the spotlight (room typeahead) and user-room (access control) Elasticsearch indexes. Index naming: - spotlight-{site}-v1-chat (overridable via SPOTLIGHT_INDEX) - user-room-{site} (overridable via USER_ROOM_INDEX) Stream naming - pkg/stream.Inbox(siteID) now returns a fully-populated Config — Name = `INBOX_{siteID}` and Subjects = `chat.inbox.{siteID}.>`. This is an additive change; inbox-worker's existing call reads only `.Name` and is unaffected. The change centralizes every stream name and subject pattern in pkg/stream/stream.go so any consumer in the repo can see at a glance which stream it binds to and with what subject filter. Cross-site federation (Sources + SubjectTransforms sourcing from remote OUTBOX streams) stays out of the baseline and is layered on by the service that owns stream creation. Collection interface - BuildAction now returns []BulkAction so a single JetStream message can fan out to zero, one, or multiple ES actions. Handler tracks per-message action ranges and acks/nakks each source message as a unit — any failed action in the range naks the whole source event for redelivery. - New FilterSubjects(siteID) so inbox-based collections can subscribe to both local (chat.inbox.{site}.member_*) and federated (chat.inbox.{site}.aggregate.member_*) variants via NATS 2.10+ consumer FilterSubjects. - StreamConfig returns jetstream.StreamConfig, converted from the canonical pkg/stream.* definition so collections never redefine stream names locally. Shared bits to remove duplication - inboxMemberCollection base struct centralizes StreamConfig (reading from pkg/stream.Inbox) and FilterSubjects for spotlight and user-room. It holds no per-instance state. 
- parseMemberEvent helper decodes OutboxEvent + MemberAddedPayload and
  validates preconditions shared by both inbox-member collections.
- esPropertiesFromStruct[T any] generic consolidates template-mapping
  reflection — messages and spotlight share the same code path.

pkg/searchengine
- New ActionUpdate type. The bulk adapter emits a plain `update` meta
  without version/version_type because _update is a read-modify-write
  operation and ES rejects external versioning on it (applies to both
  doc-merge and scripted updates; not specific to painless).
- The index action still uses external versioning (Version =
  evt.Timestamp) for messages and spotlight idempotency.

pkg/model
- New MemberAddedPayload{Subscription, Room} — the payload shape carried
  by OutboxEvent{Type: "member_added"} so inbox-member consumers can
  index without a DB lookup.
- OutboxMemberAdded / OutboxMemberRemoved constants replace
  stringly-typed "member_added" / "member_removed" literals in the new
  code.

pkg/subject
- New InboxMemberAdded / InboxMemberRemoved builders for local-publish
  subjects, their Aggregate counterparts for federated (transformed)
  subjects, and InboxAggregatePattern for inbox-worker's future
  FilterSubject. InboxMemberEventSubjects returns the four-subject list
  used by spotlight and user-room consumers.

spotlightCollection
- Per-subscription docs keyed by Subscription.ID; ActionIndex on
  member_added, ActionDelete on member_removed, both with Version =
  evt.Timestamp so the external-version check makes out-of-order
  delivery safe.
- Template pattern `spotlight-*` with search_as_you_type on roomName via
  a whitespace/lowercase custom analyzer.

userRoomCollection (multi-pod safe)
- One doc per user, keyed by user account. rooms is a plain string array
  used by the search service as a `terms` filter on message search
  queries.
- member_added emits ActionUpdate with a painless script + upsert;
  member_removed emits ActionUpdate with a painless script only.
- Restricted rooms (Subscription.HistorySharedSince != nil) are skipped —
  the search service handles those via DB+cache at query time.
- Per-room LWW guard in the scripts: each doc carries a flattened
  roomTimestamps map of roomId -> last-applied event timestamp. Both
  scripts read the stored timestamp, compare it to params.ts, and set
  ctx.op = 'none' if the incoming event is stale — ES skips the write
  entirely (no version bump, no disk I/O). This makes user-room-sync
  safe to run with multiple pods sharing the durable consumer: ES's
  per-doc atomicity on the primary shard serializes concurrent _update
  operations, and the guard converges on highest-timestamp-wins
  regardless of physical arrival order.
- The timestamp source is OutboxEvent.Timestamp (publish time), NOT
  Subscription.JoinedAt, because JoinedAt is immutable on the
  subscription row and both added/removed events for the same
  subscription would otherwise carry the same value and become
  indistinguishable to the guard.
- Template pattern `user-room-*` maps rooms as text+keyword (keeping
  existing query behavior) and roomTimestamps as `flattened` to avoid
  mapping explosion as new roomIds accumulate.
- The remove path carries only rid + ts — no `now`, no updatedAt stamp,
  because removal has no user-visible doc mutation to timestamp.

Bootstrap config (nested, test-only)
- New bootstrapConfig struct groups fields that are meaningful ONLY when
  the worker is standing up its own streams in dev / integration tests.
  Env vars are all prefixed BOOTSTRAP_ so they're obvious in deployment
  manifests.
- BOOTSTRAP_STREAMS (bool) — toggles CreateOrUpdateStream.
- BOOTSTRAP_REMOTE_SITE_IDS (list) — cross-site OUTBOX sources to attach
  to INBOX during bootstrap.
- In production, streams are owned by their publishers
  (message-gatekeeper for MESSAGES_CANONICAL, inbox-worker for INBOX)
  and search-sync-worker only manages its own durable consumers. Neither
  bootstrap field is consulted.
- Collections hold NO remote-site state.
The bootstrap loop in main.go detects the INBOX stream by comparing
against `stream.Inbox(cfg.SiteID).Name` and swaps the collection's
baseline config for inboxBootstrapStreamConfig (which layers on
cross-site Sources + SubjectTransforms) before calling
CreateOrUpdateStream. Stream creation is deduped by name so spotlight +
user-room don't double-create the shared INBOX stream.

Consumers
- Per-purpose durable names: message-sync, spotlight-sync,
  user-room-sync. Graceful shutdown waits on all three runConsumer
  goroutines via a doneChs slice.

Scope note
- inbox-worker is intentionally NOT modified here. The enhanced INBOX
  behavior (publishing, consuming aggregate.* events, migrating the
  handler to the new MemberAddedPayload shape) ships in a separate PR.
  The pkg/stream.Inbox change in this PR is additive — inbox-worker
  reads only Name and is unaffected.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* feat(search-sync-worker): support bulk invite via multi-subscription member events

Extend MemberAddedPayload from a single Subscription to []Subscription
so a single room-worker publish can carry N users being added to or
removed from the same room in one admin action. Rewire BuildAction, the
handler's buffer accounting, and the consumer loop so fan-out events
are bounded correctly against the ES bulk request limit.

Why
---
Bulk invite (an admin invites 5 users to a room at once) is a real use
case. The previous event shape forced room-worker to publish N events
for one admin action, which worked for 1:1 delivery, but now that we've
committed to fan-out semantics downstream, pushing the bulk shape
through the event model is cleaner: one publish, one event, one atomic
DB write at the producer, and the subscription list travels together.
This commit lands the payload schema change plus every consumer-side
adjustment needed to ingest it safely.
Payload schema (pkg/model/event.go)
-----------------------------------
- MemberAddedPayload.Subscription Subscription → Subscriptions []Subscription
- All subscriptions in one event MUST target the same Room (documented
  on the struct).
- The round-trip test in pkg/model is updated to exercise two
  subscriptions (one restricted, one unrestricted) in one payload.

Fan-out in collections (search-sync-worker)
-------------------------------------------
- spotlightCollection.BuildAction now loops over Subscriptions and
  emits len(subs) actions per event. All actions from one event share
  the same external Version (evt.Timestamp) so a redelivery 409s
  uniformly.
- userRoomCollection.BuildAction loops and emits one ActionUpdate per
  subscription, each keyed by a different user account (distinct ES
  docs). Restricted-room filtering (HistorySharedSince != nil) moves
  INSIDE the loop so a mixed bulk invite (some restricted, some not)
  only produces actions for the unrestricted subscriptions. If every
  subscription in the event is restricted, BuildAction returns an empty
  slice and the handler acks the source message without touching ES —
  the same path as the existing filter-out semantics.
- messageCollection is unchanged. Message sync stays strictly 1:1 —
  fan-out is only for member events.
- newSpotlightSearchIndex now takes (*Subscription, *Room) instead of
  *MemberAddedPayload so it can be called inside the loop.

Handler action-count bookkeeping (handler.go)
---------------------------------------------
The handler already tracked per-message action ranges
(pendingMsg.actionStart / actionCount), so Flush's
ack-all-or-nak-all-per-source logic is already fan-out-correct. The
change is in the public API:
- New ActionCount() — the count of buffered ES bulk actions. This is
  the quantity that should drive the flush decision for fan-out
  collections.
- Renamed BufferLen() → MessageCount() to make it unambiguous that this
  is the source-message count, not the action count.
  Used for diagnostics and the per-source ack/nak accounting.
- Removed BufferFull() — it was checking the message count against the
  batch size, which is wrong for fan-out. Callers now compare
  ActionCount() directly.
- Renamed the Handler's internal field batchSize → bulkSize to reflect
  that it bounds buffered actions, not messages.

Consumer loop split: FETCH_BATCH_SIZE vs BULK_BATCH_SIZE (main.go)
------------------------------------------------------------------
Previously one BATCH_SIZE env var conflated two distinct concerns.
Split it into two clearly named variables so operators can tune them
independently and readers can tell which concern any given value
relates to:
- FETCH_BATCH_SIZE (default 100): max JetStream messages pulled per
  cons.Fetch() round-trip. A pure JetStream-client knob — it does NOT
  bound ES bulk size.
- BULK_BATCH_SIZE (default 500): soft cap on buffered ES bulk actions
  before a flush is triggered. The real ES-side bound.
- FLUSH_INTERVAL (unchanged): max seconds before a time-based flush.

runConsumer is rewritten to be fan-out-safe:
1. Before each Fetch, clamp fetchCount to min(FETCH_BATCH_SIZE,
   BULK_BATCH_SIZE - ActionCount()). This prevents a steady stream of
   1:1 messages from overshooting BULK_BATCH_SIZE.
2. A mid-message-loop flush catches the single-fat-message case: if one
   fan-out event alone pushes ActionCount past BULK_BATCH_SIZE, flush
   immediately before processing the next message in the fetch batch —
   otherwise the next message's actions would add to an already
   oversized bulk request.
3. Outer flush conditions unchanged: BULK_BATCH_SIZE hit → flush;
   FLUSH_INTERVAL elapsed with a non-empty buffer → flush.

Tests
-----
Unit tests:
- pkg/model: TestMemberAddedPayloadJSON now uses a 2-subscription
  fixture (one restricted, one not).
- spotlight_test: new baseBulkMemberAddedPayload helper;
  TestSpotlightCollection_BuildAction_BulkInvite verifies 3 subs → 3
  actions with a shared Version;
  TestSpotlightCollection_BuildAction_BulkRemove verifies 2 subs → 2
  ActionDelete actions.
- user_room_test: new TestUserRoomCollection_BuildAction_BulkInvite (3
  unrestricted subs → 3 distinct user doc updates);
  TestUserRoomCollection_BuildAction_BulkInviteMixedRestricted (2 of 4
  subs are restricted → only 2 actions emitted);
  TestUserRoomCollection_BuildAction_AllRestrictedIsNoOp (every sub
  restricted → empty slice, no error).
- handler_test: new fanOutCollection stub (emits N actions per msg);
  TestHandler_FanOut covers (a) MessageCount/ActionCount diverging, (b)
  all fan-out actions succeed → source acked, (c) any fan-out action
  fails → source nakked, (d) multi-message mixed success — only the
  message whose range contains a failure gets nakked; the other acks
  independently.
- Existing tests updated: payload.Subscription.X →
  payload.Subscriptions[0].X; BufferLen calls renamed to MessageCount.

Integration tests:
- New buildBulkMemberEventPayload helper for multi-sub scenarios, with
  a memberFixture struct for clean (account, subID, restricted) rows.
  The single-sub helper delegates to it.
- New TestSpotlightSync_BulkInvite: publishes one event with 3
  subscriptions, drains 1 JetStream message, and asserts 3 spotlight
  docs land; then publishes a bulk remove and asserts all 3 are gone.
- New TestUserRoomSync_BulkInvite: publishes one event with 4
  subscriptions (2 restricted), drains 1 message, and asserts only the
  2 unrestricted users got upserted; then a bulk remove asserts the 2
  user docs have empty rooms arrays (ghost docs retained for LWW
  monotonicity) while restricted users are still absent.
https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* refactor(search-sync-worker): rename FlushInterval, tighten 404 handling, fail fast on bad config

Follow-up fixes to CodeRabbit's review of 201c715, plus a naming
consistency pass on the flush-interval config.

Rename FlushInterval -> BulkFlushInterval
------------------------------------------
FlushInterval alone was ambiguous — what gets flushed? The variable it
partners with (BulkBatchSize) already has the Bulk prefix, and
FlushInterval sitting next to it without the prefix looked like an
unrelated concept. Rename both the Go field and the env var so the two
ES-bulk-flush triggers (size-based BulkBatchSize, time-based
BulkFlushInterval) share a consistent Bulk* prefix.

  FlushInterval (int) -> BulkFlushInterval (int)
  FLUSH_INTERVAL (env) -> BULK_FLUSH_INTERVAL (env)

No back-compat shim — the field only ships in this unmerged PR, so no
deploy consumes the old name.

Tighten 404 handling in handler.go (CodeRabbit 🟠 major)
--------------------------------------------------------
The previous commit treated ANY 404 on Delete/Update as idempotent
success. That was too broad: `index_not_found_exception` at 404 means
the backing index/template is missing or misconfigured, and silently
acking those would drop messages on a bad deploy with no feedback.

Fix:

pkg/searchengine.BulkResult:
- New ErrorType field (populated from the ES bulk item error.type).
- BulkResult.Error (Reason) remains human-readable; ErrorType is the
  machine-readable classifier callers should match on.

pkg/searchengine/adapter.go:
- Propagates detail.Error.Type into BulkResult.ErrorType alongside the
  existing Reason.

search-sync-worker/handler.go isBulkItemSuccess:
- Delete 404: success ONLY when ErrorType is empty (a delete on a
  missing doc sets result=not_found with no error block). Any other
  error type at 404 (notably index_not_found_exception) is a real
  failure.
- Update 404: success ONLY when ErrorType ==
  "document_missing_exception" (user-room remove on an empty doc).
  index_not_found_exception or any unfamiliar error type fails closed.
- Index 404: always a failure (unchanged — indexing should create the
  doc, so 404 means the index itself is missing).

Tests:
- TestIsBulkItemSuccess updated with 14 cases covering the new shape:
  document_missing_exception vs index_not_found_exception on both
  delete and update, plus an "unknown error type at 404" fail-closed
  case.
- TestHandler_Flush_404OnDeleteAndUpdate updated to include end-to-end
  cases where ErrorType is index_not_found_exception on both delete and
  update actions — these messages must be nakked for redelivery, not
  silently acked.
- pkg/searchengine/adapter_test.go: TestAdapter_Bulk now has a "bulk
  error types propagate" subtest that verifies
  document_missing_exception and index_not_found_exception flow into
  BulkResult.ErrorType correctly.

Fail fast on non-positive batch/interval settings (CodeRabbit 🟠 major)
----------------------------------------------------------------------
runConsumer assumes FetchBatchSize, BulkBatchSize, and
BulkFlushInterval are all > 0 — otherwise:
- FetchBatchSize <= 0 would call Fetch(0) or go negative and hit the
  remaining<=0 fast path forever (busy loop).
- BulkBatchSize <= 0 keeps remaining negative forever (stall).
- BulkFlushInterval <= 0 makes the time-based flush check fire on every
  iteration.

Add startup validation in main.go immediately after config parsing so
an operator gets a clear slog.Error + os.Exit(1) with the offending
setting name and value. Matches CLAUDE.md's "fail fast on bad config"
rule.

pkg/stream/stream_test.go: convert to testify (CodeRabbit 🟡 minor)
------------------------------------------------------------------
Replaced t.Errorf / t.Fatalf with assert.Equal / require.Len to match
the repo-wide "use testify" guideline in CLAUDE.md §4.
template.go: guard empty/ignored json names (CodeRabbit 🟢 nit)
---------------------------------------------------------------
esPropertiesFromStruct previously would emit a mapping entry under ""
or "-" if a future struct had an `es` tag but no usable `json` tag.
That would silently corrupt the template. Added a skip guard with a doc
comment explaining the fail-closed policy.

inbox_integration_test.go: propagate historyShared timestamp (CodeRabbit 🟢 nit)
-------------------------------------------------------------------------------
memberFixture used to collapse historyShared into a boolean Restricted
flag, dropping the caller's timestamp value. It now carries
HistorySharedSince *time.Time verbatim and uses the Restricted bool
only as a shortcut for "pick a synthetic timestamp for me." A doc
comment spells out the precedence.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* refactor(natsutil): add Ack/Nak helpers, use in search-sync-worker

Add a pair of shared helpers in pkg/natsutil for the repeating "ack/nak
a JetStream message and log the error" pattern so every service in the
repo can use the same shape. Convert search-sync-worker's handler.go to
use them — that's the PR's motivating case.

Why
---
The pattern appears 18 times across 7 services today
(message-gatekeeper, broadcast-worker, inbox-worker,
search-sync-worker, room-worker, notification-worker, message-worker),
with divergent spellings:

  "failed to ack message" vs "ack failed" vs "ack malformed message"
  "failed to nack message" vs "failed to nak message" vs "nak failed"
  "error" vs "err" key in the slog call

Consolidating gives us:
1. One place to add tracing spans, metrics counters, or
   delivery-context fields later instead of 18.
2. A consistent structured-log shape ("reason" + "error") so operators
   can query by cause across services in log aggregation.
3. Less visual noise at the call site — `natsutil.Ack(msg, "filtered")`
   reads as intent; `if err := msg.Ack(); err != nil { slog.Error(...) }`
   is mechanical boilerplate.

Scope
-----
This commit does ONLY (a) the helper + tests and (b) the
search-sync-worker conversion. The 6 other services that use the same
pattern (13 call sites) are intentionally left alone so this PR stays
focused on spotlight/user-room sync. A small, mechanical follow-up PR
will migrate them and normalize the divergent spellings.

pkg/natsutil/ack.go
-------------------
- `Acker` / `Naker` are minimal interfaces (`{ Ack() error }` /
  `{ Nak() error }`). Both `jetstream.Msg` (nats.go) and otel-wrapped
  variants (oteljetstream.Msg) satisfy them, so the helpers work for
  every consumer in the repo without a wrapper type.
- `Ack(msg, reason)` / `Nak(msg, reason)` try the op and log any
  failure with `slog.Error("ack failed", "reason", ..., "error", ...)`.
  Fire-and-forget by design — the caller doesn't branch on the result.
- `reason` is a short label describing WHY the message is being acked
  or nakked so operators can query logs by cause.

pkg/natsutil/ack_test.go
------------------------
- Covers success, swallowed error, and compile-time interface
  satisfaction via a tiny stubMsg test double.

search-sync-worker/handler.go
-----------------------------
Five call sites converted:
- Add() malformed payload → natsutil.Ack(msg, "build action failed")
- Add() filtered event → natsutil.Ack(msg, "filtered, no actions")
- Flush() all-succeeded → natsutil.Ack(p.jsMsg, "bulk actions succeeded")
- Flush() any-failed → natsutil.Nak(p.jsMsg, "bulk action failed")
- nakAll() loop body → natsutil.Nak(p.jsMsg, reason)

nakAll gains a `reason string` parameter so its two Flush call sites
("bulk request failed", "bulk result count mismatch") emit distinct
reasons downstream — one shared helper, two distinct log labels.
https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* feat(search-sync-worker): consume new InboxMemberEvent payload shape

Replace MemberAddedPayload (Subscriptions + Room) with InboxMemberEvent
(Accounts + RoomName + event-level HistorySharedSince). Collections now
fan out by account, synthesize the spotlight DocID as
{account}_{roomID}, and short-circuit the entire bulk on
restricted-room events. Integration tests (inbox_integration_test.go)
still reference the old shape — a follow-up commit will migrate them.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* test(search-sync-worker): migrate integration tests to InboxMemberEvent

Replace memberFixture + MemberAddedPayload-based helpers with
buildInboxMemberEvent / publishInboxMemberEvent. Update DocID
assertions to the synthesized {account}_{roomID} scheme.
Restricted-room behavior is now tested as an all-or-nothing event-level
skip (HistorySharedSince != 0), matching the collection logic.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* fix(search-sync-worker): address CodeRabbit review — NAK update 409, drop invalid token_chars

- handler.go: NAK 409 for ActionUpdate (an internal version_conflict
  from concurrent writers means the painless script didn't run; acking
  would silently drop the update). 409 stays ack-on-success for
  externally-versioned ActionIndex/ActionDelete.
- spotlight.go: drop token_chars from the whitespace tokenizer — it is
  only valid on ngram/edge_ngram tokenizers. Sending it would reject
  the UpsertTemplate call.
- pkg/model/event.go: add a bson:"timestamp" tag on
  InboxMemberEvent.Timestamp per the repo-wide "every NATS event struct
  must have both json + bson tags" rule.
- spotlight.go: document the intentional json.Marshal error discard.
- adapter.go: correct a comment — ES _update DOES accept version +
  version_type=external; we omit them because the painless LWW guard
  already handles ordering.
- inbox_integration_test.go: rename
  TestSpotlight/UserRoomSyncIntegration to Test<Type>_Integration to
  follow the Test<Type>_<Scenario> convention.
- plan doc: mark Phase 2 complete (integration tests landed in
  c7a303b); add a `shell` language hint on command fences.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* ci: add GitHub Actions workflow for lint + unit + integration tests

Three parallel jobs on push to main and on every PR:
- lint: golangci-lint via golangci-lint-action@v9 (v7+ is required for
  our v2 .golangci.yml config; v6 only understands v1 configs)
- test: unit tests with the race detector via make test
- test-integration: search-sync-worker integration tests via
  make test-integration SERVICE=search-sync-worker (Docker is
  pre-installed on ubuntu-latest runners, so testcontainers-go can
  start ES + NATS)

Integration is scoped to search-sync-worker only — other services have
their own azure-pipelines.yml, and running all -tags=integration tests
in one job would be slow. Expand to a per-service matrix later if
needed.

Also drops goimports from .golangci.yml linters.settings — v2.11+
rejects it (additional properties 'goimports' not allowed) because
goimports was reclassified as a formatter in v2. The duplicate block
under formatters.settings (unchanged) keeps it active with the same
local-prefixes.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

* chore(search-sync-worker): address CodeRabbit nits on collection + drain helper

- spotlight.go / user_room.go: move the HistorySharedSince
  short-circuit ahead of the Accounts validation. Previously a
  restricted-room event with an empty Accounts slice returned an "empty
  accounts" error instead of being silently skipped like every other
  restricted event. Matches the event-level contract documented on
  InboxMemberEvent.
- inbox_integration_test.go: surface batch.Error() after draining each
  fetch so mid-batch server errors (consumer deleted, leader change)
  fail the test with their real cause instead of a misleading "drained
  N of M" mismatch.

https://claude.ai/code/session_01XTmSpmv5dT6UXX7NpRdYqN

---------

Co-authored-by: Claude <noreply@anthropic.com>
1 parent ca9fd33 commit 70f42a0

31 files changed

Lines changed: 3294 additions & 193 deletions

.github/workflows/ci.yml

Lines changed: 73 additions & 0 deletions
```yaml
name: ci

# Skip CI for docs-only changes. Note: if you later add required status
# checks, switch to per-job `if:` conditionals since paths-ignore will
# leave the checks in a "pending" state instead of passing them.
on:
  push:
    branches: [main]
    paths-ignore:
      - "docs/**"
      - "**.md"
  pull_request:
    paths-ignore:
      - "docs/**"
      - "**.md"

# Cancel in-progress runs for the same branch when a new commit lands.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  # Bumped to 1.25.9 for the April 7, 2026 security release (10 CVEs:
  # os symlink, html/template, crypto/x509, crypto/tls, etc.).
  # go.mod still declares `go 1.25.8` — 1.25.9 satisfies that constraint;
  # Dockerfiles get bumped separately.
  GO_VERSION: "1.25.9"
  # Pin golangci-lint to a known-good version so a new release can't break
  # CI without a config update. Bump deliberately.
  GOLANGCI_LINT_VERSION: "v2.11.4"

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
          cache: true
      - uses: golangci/golangci-lint-action@v9
        with:
          version: ${{ env.GOLANGCI_LINT_VERSION }}

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
          cache: true
      - name: Unit tests (race detector)
        run: make test

  # Integration tests run the full testcontainers-go stack (Elasticsearch,
  # NATS JetStream, MongoDB, Cassandra as applicable per service). Docker is
  # pre-installed on ubuntu-latest runners. Bump the timeout if container
  # image pulls are slow on cold cache.
  test-integration:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
          cache: true
      - name: Integration tests (search-sync-worker)
        run: make test-integration SERVICE=search-sync-worker
```
.golangci.yml

Lines changed: 0 additions & 3 deletions
```diff
@@ -10,9 +10,6 @@ linters:
     - bodyclose
     - exhaustive
   settings:
-    goimports:
-      local-prefixes:
-        - github.com/hmchangw/chat
     exhaustive:
       default-signifies-exhaustive: true
     gocritic:
```
