
Verify each finding against the current code and only fix it if needed.

Inline Comments

.github/workflows/release.yml

  1. Around lines 35-36: Move the "Install dependencies" step (the step that runs bun install --frozen-lockfile) so it executes after the release-only hotfix merge/reset steps (the block currently around lines 101-116 that merges hotfixes back into main and resets release), or add a second "Install dependencies" invocation after that block. The workflow must see the fresh repository state before running bun install --frozen-lockfile so that any changes to package.json or bun.lock introduced by the hotfix merge are included in the installed dependencies.

    • ✅ Amendment: Moved the Install dependencies step to run after release branch merge/reset so dependencies are installed against the final repository state.
  2. Around lines 107-112: In the "Sync production worker secret" step, echo piped into bunx wrangler secret put appends a trailing newline; replace it with a printf '%s' invocation so the value of the environment secret BLOOMSTUDIO_WORKER_SHARED_SECRET is written without a newline. Update the command that pipes into bunx wrangler secret put BLOOMSTUDIO_WORKER_SHARED_SECRET --env production accordingly to avoid storing a newline-terminated secret.

    • ✅ Amendment: Replaced echo with printf '%s' when piping the worker secret to Wrangler to prevent newline contamination.

cloudflare-migration-plan.md

  1. Around lines 799-800: Replace the absolute Windows paths in the markdown references with repo-relative links: find the occurrences of cloudflareDispatch.ts, cloudflareWorkerHttp.ts, and secondaryAssets.ts (and the other listed occurrences) and update their link targets to point at the files inside the repository using relative paths instead of C:/Code/.... Correct all similar links in the other noted ranges consistently.

    • ✅ Amendment: Converted affected absolute C:/Code/... markdown links to repo-relative paths for portability and editor compatibility.

components/images/image-lightbox.test.tsx

  1. Around lines 765-786: The test currently calls useImageDetailsMock.mockReturnValue(undefined), which leaks the mock implementation into other tests; change it to useImageDetailsMock.mockReturnValueOnce(undefined) inside the "renders gallery media immediately while full details are still loading" test so the undefined response is scoped to this test only and subsequent tests are unaffected.

    • ✅ Amendment: Scoped this mock to mockReturnValueOnce(undefined) so subsequent tests use default behavior and do not inherit leaked state.

components/images/image-lightbox.tsx

  1. Around lines 532-537: canUseDesktopMediaNavigation currently allows keyboard/side-button navigation even when the PromptLibrary modal is open; gate it on !libraryOpen so media navigation is disabled while the prompt-library modal is showing. Locate the boolean expression that defines canUseDesktopMediaNavigation and add && !libraryOpen (or the equivalent modal-open state), and apply the same change to the other places where desktop media navigation is computed. Reference the same modal state variable (libraryOpen) used by PromptLibrary so the gating stays consistent.

    • ✅ Amendment: Added !libraryOpen to desktop media navigation gating so keyboard/media side-button navigation is suppressed while Prompt Library is open.
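
    The gating above is a one-line boolean change. A minimal sketch, where isDesktop and hasMultipleItems are hypothetical stand-ins for the component's real state flags:

    ```typescript
    // Hypothetical inputs standing in for the component's real state flags.
    interface NavigationState {
      isDesktop: boolean;        // viewport qualifies for desktop navigation
      hasMultipleItems: boolean; // there is something to navigate between
      libraryOpen: boolean;      // PromptLibrary modal visibility
    }

    // Desktop media navigation is allowed only when no modal claims the keys.
    function canUseDesktopMediaNavigation(state: NavigationState): boolean {
      return state.isDesktop && state.hasMultipleItems && !state.libraryOpen;
    }
    ```

    Every other expression that decides desktop navigation should include the same !libraryOpen term so the keyboard and media side-button paths agree.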

components/images/lightbox/lightbox-media-display.tsx

  1. Around lines 13-29: The module-global Set readyImageUrls grows unbounded; change markLightboxImageUrlReady and isImageUrlReady to use a bounded cache (a simple LRU or capped FIFO) instead. Introduce a MAX_READY_IMAGE_URLS constant and maintain insertion order (a Map or a small LRU helper) so that when markLightboxImageUrlReady(url) adds a new entry it evicts the oldest entries once the cap is exceeded; keep isImageUrlReady(url) checking that bounded cache.

    • ✅ Amendment: Replaced the unbounded ready-URL set with a capped insertion-ordered Map and eviction logic (MAX_READY_IMAGE_URLS) to avoid unbounded growth.
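
    A capped, insertion-ordered Map gives the described eviction cheaply, since Map preserves insertion order. This is a sketch rather than the file's actual code, and the cap value is illustrative:

    ```typescript
    const MAX_READY_IMAGE_URLS = 200; // illustrative cap

    // Map preserves insertion order, so the first key is always the oldest.
    const readyImageUrls = new Map<string, true>();

    function markLightboxImageUrlReady(url: string): void {
      // Re-inserting moves the URL to the newest position (LRU-style refresh).
      readyImageUrls.delete(url);
      readyImageUrls.set(url, true);
      // Evict oldest entries once the cap is exceeded.
      while (readyImageUrls.size > MAX_READY_IMAGE_URLS) {
        const oldest = readyImageUrls.keys().next().value;
        if (oldest === undefined) break;
        readyImageUrls.delete(oldest);
      }
    }

    function isImageUrlReady(url: string): boolean {
      return readyImageUrls.has(url);
    }
    ```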
  2. Around lines 197-200: Change toggleZoom and handleImageLoad to accept correctly typed events instead of relying on as unknown as casts. Let toggleZoom accept a union (e.g., React.MouseEvent | React.KeyboardEvent) and narrow at runtime (detect keyboard events by checking e.type === 'keydown' or 'key' in event, calling preventDefault() for Space), then run the same zoom logic for mouse and keyboard without casting. Update the onKeyDown handler to call toggleZoom(e) directly and call e.preventDefault() when handling Space/Enter to stop page scroll. Change handleImageLoad to accept React.SyntheticEvent (or narrow from unknown) and remove the raw event cast, using the typed event properties instead; update all references to the new signatures.

    • ✅ Amendment: Removed unsafe casts by wiring keyboard handlers to typed events directly and ensuring Enter/Space handling prevents default scroll before toggling zoom.
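
    The runtime narrowing between mouse and keyboard events needs no casts once keyboard events are detected by a keyboard-only property. The sketch below uses structural stand-ins for React's event types so it is self-contained; the zoomed flag is a placeholder for the component's zoom state:

    ```typescript
    // Structural stand-ins for React.MouseEvent / React.KeyboardEvent.
    interface BaseEvent { type: string; preventDefault(): void }
    interface KeyEvent extends BaseEvent { key: string }
    type ZoomEvent = BaseEvent | KeyEvent;

    function isKeyboardEvent(e: ZoomEvent): e is KeyEvent {
      return "key" in e; // keyboard events carry a `key` property
    }

    let zoomed = false; // placeholder for the component's zoom state

    function toggleZoom(e: ZoomEvent): void {
      if (isKeyboardEvent(e)) {
        // Only Enter/Space activate; prevent Space from scrolling the page.
        if (e.key !== "Enter" && e.key !== " ") return;
        e.preventDefault();
      }
      zoomed = !zoomed; // shared zoom logic for both input kinds
    }
    ```

    With this shape, onKeyDown can pass its event straight to toggleZoom and the mouse path stays unchanged.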

convex/cloudflareDispatch.ts

  1. Around lines 120-127: Create a shared timeout helper, e.g. fetchWithTimeout(url, options, timeoutMs), that uses AbortController to abort the request after a configurable timeout (roughly 5000 ms) and throws on timeout. Replace the bare fetch(...) calls used for the worker dispatch POSTs (the calls building ${baseUrl.replace(/\/$/, "")}/dispatch with the x-bloomstudio-worker-secret header and a JSON.stringify(payload) body) with await fetchWithTimeout(...), passing the AbortController signal into the request options so the fetch is cancellable. Apply this change to all five dispatch locations shown in the diff so the Convex action fails fast and can schedule retries instead of waiting for the platform timeout.

    • ✅ Amendment: Added shared fetchWithTimeout (AbortController-based, 5s default) and switched all dispatch POST calls to use it for fail-fast retries.
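
    A sketch of such a helper, with the ~5000 ms default from the finding. The fetchImpl parameter is an addition made here so the helper can be exercised without the network; the real helper can simply close over the platform's global fetch:

    ```typescript
    // Structural fetch type so the sketch does not depend on DOM lib typings;
    // at the real call sites this is the platform-global fetch.
    type FetchLike = (
      url: string,
      init?: { signal?: AbortSignal } & Record<string, unknown>,
    ) => Promise<unknown>;

    // Aborts the request once timeoutMs elapses; an aborted fetch rejects,
    // which lets the caller treat the timeout as a retryable failure.
    async function fetchWithTimeout(
      fetchImpl: FetchLike,
      url: string,
      options: Record<string, unknown> = {},
      timeoutMs = 5000,
    ): Promise<unknown> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), timeoutMs);
      try {
        return await fetchImpl(url, { ...options, signal: controller.signal });
      } finally {
        clearTimeout(timer); // never leak the timer, on success or failure
      }
    }
    ```

    Each dispatch site then awaits the helper instead of a bare fetch, so a hung worker endpoint surfaces as a catchable error the action can retry.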
  2. Around lines 11-26: The queue payload types SingleGenerationDispatchPayload and BatchItemDispatchPayload currently include apiKey; remove apiKey from these payloads (and from the other queue types around lines 112-127 and 209-225) so only identifiers (generationId, batchJobId, itemIndex, attempt, enqueuedAt) are serialized. Update the worker that claims/continues queued jobs to fetch or derive the provider API key server-side (from a secure vault/secret store, or by lookup via generationId/batchJobId) instead of reading it from the queue message; ensure the enqueue functions stop adding apiKey and the claim/continue handlers obtain credentials at runtime.

    • ✅ Amendment: Removed API keys from queue payloads/validators and switched worker generation flow to consume API keys from secure continuation state returned by Convex.
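
    After the change, a queue message carries identifiers only. A sketch of the slimmed payloads, with field names taken from the finding and the enqueue helper name being hypothetical:

    ```typescript
    // Queue payloads carry identifiers only; credentials are resolved
    // server-side when the worker claims or continues the job.
    interface SingleGenerationDispatchPayload {
      generationId: string;
      attempt: number;
      enqueuedAt: number;
    }

    interface BatchItemDispatchPayload {
      batchJobId: string;
      itemIndex: number;
      attempt: number;
      enqueuedAt: number;
    }

    // Hypothetical enqueue helper: note there is no apiKey field to serialize.
    function buildSinglePayload(generationId: string, attempt: number): SingleGenerationDispatchPayload {
      return { generationId, attempt, enqueuedAt: Date.now() };
    }
    ```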

convex/cloudflareWorkerHttp.ts

  1. Around lines 208-213: The handler uses unsafe as never casts (e.g., when calling internal.singleGeneration.claimGenerationForWorker with generationId), which hide invalid IDs; update the Zod request schemas in convex/cloudflareWorkerHttp.ts to validate and narrow plain string IDs into Convex branded Id types (use .refine() or a reusable branded helper to produce Id<"generation">-style types), remove the as never casts, and pass the validated, correctly typed IDs into mutations such as internal.singleGeneration.claimGenerationForWorker. Apply the same pattern to the other handler inputs referenced in the comment so IDs are type-safe at the HTTP boundary.
    • ✅ Amendment: No further code change needed; the current implementation already validates IDs via typed Zod schemas (z.custom<Id<...>>) and does not rely on as never casts.

convex/generatedImages.ts

  1. Around lines 1164-1191: getRecoverableUnanalyzedImages currently fetches only a fixed prefix (limit*4) of rows where isSensitive is null/undefined and filters out dispatched/processing rows afterwards, which can starve newer recoverable work. Either (a) query with an index that includes moderationDispatchStatus and updatedAt (or moderationDispatchStatus alone) so the DB returns non-dispatched, non-processing rows directly, or (b) implement a paginated scan loop using withIndex("by_sensitivity", ...) and/or the legacy filter that repeatedly fetches additional pages (increasing the offset or taking in chunks) until args.limit recoverable items are accumulated or the results are exhausted, applying the isRecoverable predicate as you collect so the scan never stops at an early prefix dominated by in-flight rows.
    • ✅ Amendment: Reworked the recovery query to paginate through pending buckets (null plus legacy undefined) until the requested number of recoverable items is collected or the data is exhausted.
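
    The accumulate-until-limit loop can be sketched independently of Convex. Here a hypothetical fetchPage callback stands in for the indexed query, and isRecoverable mirrors the predicate named in the finding:

    ```typescript
    interface PendingRow { id: string; dispatched: boolean }

    // Hypothetical predicate mirroring isRecoverable: skip in-flight rows.
    const isRecoverable = (row: PendingRow): boolean => !row.dispatched;

    // Pages through results until `limit` recoverable rows are collected
    // or the source is exhausted; fetchPage stands in for the indexed query.
    function collectRecoverable(
      fetchPage: (cursor: number, pageSize: number) => PendingRow[],
      limit: number,
      pageSize = 50,
    ): PendingRow[] {
      const collected: PendingRow[] = [];
      let cursor = 0;
      while (collected.length < limit) {
        const page = fetchPage(cursor, pageSize);
        if (page.length === 0) break; // exhausted
        for (const row of page) {
          if (isRecoverable(row)) collected.push(row);
          if (collected.length >= limit) break;
        }
        cursor += page.length;
      }
      return collected;
    }
    ```

    The key property is that a long prefix of dispatched/processing rows no longer exhausts the scan before any recoverable row is seen.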

workers/bloomstudio-worker/src/index.ts

  1. Around lines 1071-1086: The upload can leak unless the R2 object is always removed when the generation does not finish successfully; wrap the call to postConvexJsonWithRetry("/workers/single-generation/complete") in try/catch/finally (or try/finally) and ensure deleteR2Object(env, uploadResult.r2Key) is called whenever completeResult?.completed !== true, including when completeResult.duplicate === true or when the post call throws, replacing the current conditional that only deletes on (!completed && !duplicate). Mirror the same try/finally cleanup logic in the batch-item path that uses postConvexJsonWithRetry and deleteR2Object (the code around completeResult, uploadResult, generationId, and workerAttempt).
    • ✅ Amendment: Wrapped completion callbacks in try/finally for both single and batch worker paths; R2 objects are now deleted whenever completion is not definitively successful.
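
    The cleanup invariant, delete the R2 object unless completion definitively succeeded, fits naturally in try/finally. A sketch with hypothetical stand-ins for the worker's postConvexJsonWithRetry and deleteR2Object helpers:

    ```typescript
    interface CompleteResult { completed: boolean; duplicate?: boolean }

    // Stand-ins for postConvexJsonWithRetry / deleteR2Object.
    type CompleteFn = () => Promise<CompleteResult>;
    type DeleteFn = (r2Key: string) => Promise<void>;

    async function finishGeneration(
      complete: CompleteFn,
      deleteObject: DeleteFn,
      r2Key: string,
    ): Promise<void> {
      let completeResult: CompleteResult | undefined;
      try {
        completeResult = await complete();
      } finally {
        // Delete whenever completion is not definitively successful,
        // which covers duplicates and thrown completion calls alike.
        if (completeResult?.completed !== true) {
          await deleteObject(r2Key);
        }
      }
    }
    ```

    Because the check lives in finally, a thrown completion call still triggers the delete before the error propagates.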
  2. Around lines 410-451: postConvexJson currently calls fetch with no timeout, which can hang and prevent postConvexJsonWithRetry from retrying; wrap the fetch in an AbortController with a configurable hard timeout (e.g., a timeoutMs constant or parameter), pass controller.signal to fetch, and abort the controller after the timeout. When the fetch is aborted, make postConvexJson throw a retryable Error (detect the abort via error.name === "AbortError" or similar) so postConvexJsonWithRetry can catch and retry; keep WORKER_SECRET_HEADER and getRequiredConfig unchanged.
    • ✅ Amendment: Added a timeout-aware postConvexJson using AbortController and propagated timeout failures as errors so the retry wrapper can re-attempt.

Outside Diff Comments

convex/orphanCleanup.ts

  1. Line 323: The cleanup action cleanupOrphanedR2Objects defines prefixes = ["generated/", "thumbnails/", "reference/"], but auditOrphanedR2Objects also scans "previews/"; add "previews/" to the prefixes array used in cleanupOrphanedR2Objects so that preview objects discovered by the audit get re-listed and removed by the cleanup routine.
    • ✅ Amendment: Added "previews/" to the cleanup prefix list so the cleanup scope matches the audit scope.

hooks/queries/use-generate-image.ts

  1. Around lines 61-76: The public interface UseGenerateImageOptions incorrectly types the lifecycle callback parameter as ImageGenerationParams only; update the signatures of onMutate, onSuccess, onError, and onSettled so their params argument is the union ImageGenerationParams | VideoGenerationParams (matching the runtime entry.params passed through), and ensure any related type annotations that reference those callbacks use the same union to avoid narrowing and keep the public API typings correct.
    • ✅ Amendment: Updated the callback signatures in UseGenerateImageOptions to consistently accept ImageGenerationParams | VideoGenerationParams.

hooks/use-media-player.ts

  1. Around lines 158-169: dispatchLoad currently calls markMediaUrlReady unconditionally, which lets loadedmetadata persist a URL before any frame is available; change the guard so markMediaUrlReady(getMediaSource(element) || currentUrlRef.current) is only called when the element is an image (HTMLImageElement) or the media element's readyState >= HTMLMediaElement.HAVE_CURRENT_DATA (i.e., at least one frame is available). Keep the existing isCurrentMediaElement, clearErrorTimeout, setHasError, and setIsLoading behavior; only wrap the markMediaUrlReady call in the ready-state/image check so handleVideoLoadedMetadata no longer caches URLs prematurely.
    • ✅ Amendment: Guarded readiness caching so URLs are marked ready only for images or for videos with readyState >= HAVE_CURRENT_DATA.
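
    Outside a browser the DOM classes are unavailable, so this sketch narrows structurally; in the hook itself the check would be an instanceof HTMLImageElement test plus readyState. HAVE_CURRENT_DATA is the standard readyState value 2:

    ```typescript
    const HAVE_CURRENT_DATA = 2; // HTMLMediaElement.HAVE_CURRENT_DATA

    // Structural stand-ins: images have no readyState, media elements do.
    type ImageLike = { tagName: "IMG" };
    type MediaLike = { tagName: "VIDEO" | "AUDIO"; readyState: number };
    type MediaElementLike = ImageLike | MediaLike;

    // Only persist the URL when a frame is actually displayable: images are
    // ready on load, videos only once readyState reaches HAVE_CURRENT_DATA.
    function shouldMarkMediaUrlReady(element: MediaElementLike): boolean {
      if (element.tagName === "IMG") return true;
      return element.readyState >= HAVE_CURRENT_DATA;
    }
    ```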

Nitpick Comments

app/api/images/delete-bulk/route.ts

  1. Around lines 97-98: The loop over r2Keys contains a redundant runtime type guard (if (typeof key !== "string") continue); the Zod validation already guarantees each element is a string, so remove that check from the for (const key of r2Keys) loop and type r2Keys as string[] (or use the Zod-parsed type) so the code relies on static, validated types rather than repeating a runtime typeof guard.
    • ✅ Amendment: Removed the redundant typeof key guard; the loop now relies on schema-validated string[] input.

components/github-star-button.tsx

  1. Around lines 12-14: The GitHub payload is cast with as GitHubRepoResponse, which defeats the runtime guard; instead, parse the JSON into an unknown value and perform a runtime type check that narrows it to { stargazers_count: number }, then treat stargazers_count as required. Remove the as cast where the JSON is parsed, add a small guard function (or inline checks) that verifies the value is an object with a numeric stargazers_count, and only then read stargazers_count (update the usages around the GitHubRepoResponse type and the code paths near lines 26-31).
    • ✅ Amendment: Replaced cast-based parsing with unknown plus a runtime type guard before reading stargazers_count.
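
    A minimal guard for the expected shape; the helper names hasStargazersCount and readStarCount are illustrative, not the component's actual identifiers:

    ```typescript
    // Narrow an unknown JSON payload to the one field the button needs.
    function hasStargazersCount(value: unknown): value is { stargazers_count: number } {
      return (
        typeof value === "object" &&
        value !== null &&
        typeof (value as { stargazers_count?: unknown }).stargazers_count === "number"
      );
    }

    // Returns the star count, or null for malformed payloads.
    function readStarCount(payload: unknown): number | null {
      return hasStargazersCount(payload) ? payload.stargazers_count : null;
    }
    ```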

components/studio/gallery/persistent-image-gallery.test.tsx

  1. Around lines 266-270: The test currently uses activeImageId="conv-8", which is too close to the end and doesn't lock the preload threshold; exercise the exact preload boundary by setting activeImageId to the first index that should trigger preload (the item at preloadThreshold from the end) and assert mockHistoryLoadMore was called with 20, then add a second assertion/variant using an activeImageId one index further from the end (outside the threshold) and assert mockHistoryLoadMore was not called. Reference the PersistentImageGallery render, the activeImageId prop, and mockHistoryLoadMore to locate and update the test.
    • ✅ Amendment: Added explicit preload threshold boundary tests in studio-shell.test.tsx (the current owner of the lightbox-navigation preload behavior), covering both the boundary-trigger and outside-threshold cases.
  2. Around lines 254-257: The test creates extendedImages using an unsafe as Id<"generatedImages"> cast that bypasses type safety; avoid the cast by either (a) using TypeScript's satisfies on the array/object literal so each item conforms to the expected type, or (b) adding a small test helper factory (e.g., makeGeneratedImage(id, overrides?)) that returns a properly typed item and using it in the Array.from mapping. Adjust the creation of extendedImages and any usage of mockConvexImages[0] accordingly so no as cast is needed.
    • ✅ Amendment: Removed the as Id<...> casts from persistent-image-gallery.test.tsx by using plain string IDs in mock records.

components/studio/gallery/persistent-image-gallery.tsx

  1. Around lines 384-402: The effect should call the stable ref instead of the changing callback to prevent unnecessary re-runs: replace direct calls to handleLoadMore with handleLoadMoreRef.current() inside the effect and remove handleLoadMore from the dependency array, keeping activeImageId, canLoadMore, isLoadingMore, and mappedImages (LIGHTBOX_PRELOAD_THRESHOLD is used unchanged). Keep the existing early-return guards and verify handleLoadMoreRef.current exists before calling it.
    • ✅ Amendment: No further change needed; PersistentImageGallery already calls handleLoadMoreRef.current() in the effect and avoids unstable callback dependency churn.

convex/batchGeneration.ts

  1. Around lines 173-194: The loop that creates up to args.count batchItems in a single mutation (the for loop that calls ctx.db.insert("batchItems", ...)) can hit Convex mutation limits for very large batches; insert items in configurable chunks (e.g., CHUNK_SIZE = 100), and if more remain after a chunk, schedule the next chunk with ctx.scheduler.runAfter pointing at internal.batchGeneration.seedBatchDispatchChunk (or a new seedBatchItemChunk handler), passing batchJobId and the next startIndex so the remaining items are inserted in subsequent mutations. Preserve the identifiers (batchJobId, ownerId, itemIndex, status fields) and the createdAt/updatedAt semantics.
    • ✅ Amendment: Implemented chunked batch item seeding (BATCH_ITEM_INSERT_CHUNK_SIZE) with follow-up scheduling via seedBatchItemChunk before dispatch seeding.
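
    The chunk arithmetic itself is a pure helper independent of Convex's scheduler. A sketch (the CHUNK_SIZE value and names are illustrative) of how each mutation would determine its slice and whether a follow-up must be scheduled:

    ```typescript
    const CHUNK_SIZE = 100; // illustrative per-mutation insert cap

    interface ChunkPlan {
      startIndex: number;            // first itemIndex inserted by this mutation
      endIndex: number;              // exclusive end of this mutation's slice
      nextStartIndex: number | null; // schedule a follow-up here, or null when done
    }

    // Given the total item count and where this mutation starts, compute
    // the slice to insert now and whether another chunk must be scheduled.
    function planChunk(totalCount: number, startIndex: number): ChunkPlan {
      const endIndex = Math.min(startIndex + CHUNK_SIZE, totalCount);
      return {
        startIndex,
        endIndex,
        nextStartIndex: endIndex < totalCount ? endIndex : null,
      };
    }
    ```

    The mutation inserts items [startIndex, endIndex) and, when nextStartIndex is non-null, calls ctx.scheduler.runAfter to seed the next chunk.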

convex/contentAnalysis.ts

  1. Around lines 117-123: The rate-limited branch in convex/contentAnalysis.ts currently returns null when args.allProvidersRateLimited, leaving rows pending with no guaranteed wake-up; update the analyzeRecentImages path (the branch checking args.allProvidersRateLimited and the similar branches using queuedImageCount, ANALYSIS_QUEUE_LOOKAHEAD, and DELAY_BETWEEN_REQUESTS_MS) to schedule a retry instead of returning null: either return a non-null delay (e.g., DELAY_BETWEEN_REQUESTS_MS) or explicitly schedule another analyzeRecentImages invocation so that pending images are re-scanned once provider limits expire. Apply the same fix to the other referenced occurrences around the queuedImageCount/ANALYSIS_QUEUE_LOOKAHEAD checks.
    • ✅ Amendment: Updated the scheduling helper so all-provider-rate-limited states return DELAY_BETWEEN_REQUESTS_MS; recovery now always requeues future scans.

convex/lib/batchGenerationState.test.ts

  1. Around lines 102-144: Add a unit test covering the pending → processing transition in getBatchStatusAfterItemSettlement: call it with status: "pending" and item counts that should move the batch to "processing" (e.g., completedCount < totalCount and no terminal condition), and assert the returned status is "processing". Place it alongside the existing tests in the same describe block so future regressions of the pending branch are detected.
    • ✅ Amendment: Added explicit unit coverage for the pending → processing transition in batchGenerationState.test.ts.

convex/lib/openrouter.ts

  1. Line 125: The code currently casts response.json() to OpenRouterChatCompletionResponse, which bypasses runtime validation; parse the JSON into unknown (e.g., const raw = await response.json() as unknown), then perform runtime narrowing/validation on raw to ensure it has the expected shape (an object with a choices array where choices[0].message.content is a string) before assigning it to a typed variable. Replace the direct cast with a validated value (or a type-guard function) and handle missing/malformed choices[0].message.content through a controlled error path so malformed provider responses fail safely.
    • ✅ Amendment: Switched OpenRouter parsing to unknown plus runtime extraction (getFirstMessageContent) and schema validation before using the content.
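
    The runtime extraction can be a single narrowing helper. A sketch of a getFirstMessageContent-style function over an unknown payload (the amendment names such a helper; this body is an illustration, not the file's code):

    ```typescript
    // Walks choices[0].message.content on an unknown payload, returning the
    // string content or null if any level of the shape is missing/malformed.
    function getFirstMessageContent(raw: unknown): string | null {
      if (typeof raw !== "object" || raw === null) return null;
      const choices = (raw as { choices?: unknown }).choices;
      if (!Array.isArray(choices) || choices.length === 0) return null;
      const first = choices[0];
      if (typeof first !== "object" || first === null) return null;
      const message = (first as { message?: unknown }).message;
      if (typeof message !== "object" || message === null) return null;
      const content = (message as { content?: unknown }).content;
      return typeof content === "string" ? content : null;
    }
    ```

    The caller then treats a null result as the controlled error path for malformed provider responses.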

convex/lib/promptInference.ts

  1. Around lines 23-29: Replace the unsafe as CerebrasChatCompletionResponse cast with a lightweight runtime type guard or schema check before passing data to parseResult: implement a small validation function (e.g., isCerebrasChatCompletionResponse) that verifies the top-level shape (a choices array with message.content string) and use it to narrow the response; if validation fails, throw or return a clear parse error so parseResult only receives structurally validated data.
    • ✅ Amendment: Replaced the Cerebras cast with runtime message-content extraction from unknown JSON and explicit empty-response handling before parseResult.

wrangler.jsonc

  1. Around lines 11-25: The top-level observability.enabled is set to false while the nested settings (observability.logs.persist and observability.logs.invocation_logs) are enabled; verify whether Wrangler treats observability.enabled as authoritative and, if so, set observability.enabled to true so the nested log settings take effect, or, if the top-level flag must remain false, remove/disable the nested persistence/invocation flags. Adjust observability.enabled, observability.logs.persist, and observability.logs.invocation_logs so invocation logs are actually enabled at deploy time.
    • ✅ Amendment: Set the top-level observability.enabled to true so the configured nested invocation log settings take effect.