Refactor: Separate transcription concerns into service, domain, and presentation layers#12
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds a separated transcription feature: DeepgramService performs audio transcription; GeminiService (via ChatbotService) generates summary and prescription; TranscriptionModel stores results; TranscriptionController manages recording, transcription, and processing state; TranscriptionScreen provides the UI; main.dart wires the controller via Provider.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant UI as TranscriptionScreen
    participant Controller as TranscriptionController
    participant Recorder as AudioRecorder
    participant Deepgram as DeepgramService
    participant Gemini as GeminiService
    User->>UI: Tap mic (start)
    UI->>Controller: toggleRecording()
    Controller->>Recorder: start(recordingPath)
    Controller->>Controller: start waveform timer
    User->>UI: Tap mic (stop)
    UI->>Controller: toggleRecording()
    Controller->>Recorder: stop()
    Controller->>Controller: set state -> transcribing
    Controller->>Deepgram: transcribe(recordingPath)
    Deepgram-->>Controller: transcript or "No speech detected"
    alt transcript present
        Controller->>Gemini: generateSummary(transcript)
        Gemini-->>Controller: summary
        Controller->>Gemini: generatePrescription(transcript)
        Gemini-->>Controller: prescription
    end
    Controller->>UI: notifyListeners() (state -> done or error)
    UI-->>User: display results
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 4
🧹 Nitpick comments (7)
lib/features/transcription/data/gemini_service.dart (3)
6-17: Consider adding error handling or custom exceptions.

Both methods let exceptions propagate directly from `ChatbotService`. While the controller does catch these, wrapping them in service-specific exceptions would provide better error context and make debugging easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/data/gemini_service.dart` around lines 6 - 17, Wrap calls to _chatbotService.getGeminiResponse in generateSummary and generatePrescription with try/catch and rethrow a service-specific exception (e.g., GeminiServiceException or TranscriptionProcessingException) that includes context (which method failed and the transcription snippet) and preserves the original error (as cause or inner exception); update both generateSummary and generatePrescription to catch any thrown exceptions from _chatbotService.getGeminiResponse, build a descriptive message mentioning the method name (generateSummary/generatePrescription) and the transcription, and throw the custom exception with the original exception attached.
12-17: Unexplained hardcoded delay needs documentation or removal.

The 3-second delay before calling `getGeminiResponse` is unexplained. If this is for API rate limiting, consider:
- Adding a comment explaining the reason
- Making the delay configurable
- Using a proper rate-limiting/retry mechanism
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/data/gemini_service.dart` around lines 12 - 17, The hardcoded 3-second Future.delayed in generatePrescription makes timing behaviour opaque; remove the literal delay and either (A) document its purpose or (B) make it configurable and/or replaced with a proper rate-limiting/retry mechanism. Specifically, remove the await Future.delayed(const Duration(seconds: 3)); from generatePrescription and instead read a Duration (e.g., prescriptionDelay) injected via the class constructor or a configuration service, add a one-line comment above generatePrescription explaining why any delay exists, and if the delay was intended for API throttling replace it with a rate limiter or retry logic that wraps _chatbotService.getGeminiResponse to handle backoff and limits.
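As a sketch of the rate-limiting option suggested above, a minimal serializer that enforces a spacing interval between calls could look like this (class name and interval are illustrative, not part of the PR):

```dart
/// Enforces a minimum interval between consecutive async calls,
/// e.g. to space out Gemini requests instead of a hardcoded delay.
class RateLimiter {
  RateLimiter(this.minInterval);

  final Duration minInterval;
  DateTime _lastCall = DateTime.fromMillisecondsSinceEpoch(0);

  Future<T> run<T>(Future<T> Function() action) async {
    final elapsed = DateTime.now().difference(_lastCall);
    if (elapsed < minInterval) {
      // Wait only for the remainder of the interval, not a fixed delay.
      await Future.delayed(minInterval - elapsed);
    }
    _lastCall = DateTime.now();
    return action();
  }
}
```

A service could then wrap its API call as `_limiter.run(() => _chatbotService.getGeminiResponse(prompt))`, making the delay explicit and configurable.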
3-4: Consider dependency injection for testability.

Instantiating `ChatbotService` directly inside the class makes it difficult to mock in unit tests. Consider accepting `ChatbotService` as a constructor parameter with a default value.

♻️ Proposed refactor for dependency injection
```diff
 class GeminiService {
-  final ChatbotService _chatbotService = ChatbotService();
+  final ChatbotService _chatbotService;
+
+  GeminiService({ChatbotService? chatbotService})
+      : _chatbotService = chatbotService ?? ChatbotService();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/data/gemini_service.dart` around lines 3 - 4, GeminiService currently constructs a ChatbotService internally which hinders mocking; modify GeminiService to accept a ChatbotService parameter in its constructor (e.g., GeminiService({ChatbotService? chatbotService}) : _chatbotService = chatbotService ?? ChatbotService()) and keep the private field final _chatbotService so callers/tests can inject a mock ChatbotService while defaulting to a real instance when none is provided.

lib/features/transcription/presentation/transcription_controller.dart (3)
14-17: Consider dependency injection for services.

Instantiating `AudioRecorder`, `DeepgramService`, and `GeminiService` directly makes unit testing difficult. Consider accepting these as constructor parameters.

♻️ Proposed refactor for dependency injection
```diff
 class TranscriptionController extends ChangeNotifier {
-  final _audioRecorder = AudioRecorder();
-  final _deepgramService = DeepgramService();
-  final _geminiService = GeminiService();
+  final AudioRecorder _audioRecorder;
+  final DeepgramService _deepgramService;
+  final GeminiService _geminiService;
+
+  TranscriptionController({
+    AudioRecorder? audioRecorder,
+    DeepgramService? deepgramService,
+    GeminiService? geminiService,
+  })  : _audioRecorder = audioRecorder ?? AudioRecorder(),
+        _deepgramService = deepgramService ?? DeepgramService(),
+        _geminiService = geminiService ?? GeminiService();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/presentation/transcription_controller.dart` around lines 14 - 17, The TranscriptionController currently instantiates AudioRecorder, DeepgramService, and GeminiService directly (fields _audioRecorder, _deepgramService, _geminiService); change the class to accept these dependencies via constructor parameters (with optional defaults for production) so tests can inject mocks: add constructor parameters for AudioRecorder, DeepgramService, and GeminiService, assign them to the existing fields, and remove direct new allocations inside the class body so unit tests can pass test doubles into TranscriptionController.
115-128: Consider parallelizing independent Gemini calls.

`generateSummary` and `generatePrescription` are independent operations that could run concurrently. However, note that `generatePrescription` has a 3-second delay that may be intentional rate limiting.

♻️ Proposed parallel execution (if rate limiting permits)
```diff
 Future<void> _processWithGemini(String transcript) async {
   try {
-    final summary = await _geminiService.generateSummary(transcript);
-    final prescription = await _geminiService.generatePrescription(transcript);
+    final results = await Future.wait([
+      _geminiService.generateSummary(transcript),
+      _geminiService.generatePrescription(transcript),
+    ]);
+    final summary = results[0];
+    final prescription = results[1];
     data = data.copyWith(summary: summary, prescription: prescription);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/presentation/transcription_controller.dart` around lines 115 - 128, _processWithGemini currently awaits _geminiService.generateSummary and then _geminiService.generatePrescription sequentially; change it to run those independent calls in parallel (e.g., kick off both Futures and await them together with Future.wait or similar) so you collect summary and prescription concurrently before calling data.copyWith and updating state; keep the same error handling and preserve awareness of the intentional 3-second delay/rate limiting on generatePrescription (add a comment or a configuration gate if you need to keep it serialized in some environments).
130-137: Reuse `Random` instance instead of creating new ones.

Line 133 creates a new `Random()` instance on every loop iteration (10 ticks per second × 40 values = 400 instantiations per second). Create a single instance as a field and reuse it.

♻️ Proposed fix to reuse Random instance
```diff
+  final _random = Random();
+
   void _startWaveformAnimation() {
     _waveformTimer = Timer.periodic(const Duration(milliseconds: 100), (_) {
       for (int i = 0; i < waveformValues.length; i++) {
-        waveformValues[i] = Random().nextDouble();
+        waveformValues[i] = _random.nextDouble();
       }
       notifyListeners();
     });
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/presentation/transcription_controller.dart` around lines 130 - 137, The _startWaveformAnimation method currently instantiates a new Random() on every tick; add a single Random field (e.g., final Random _rand = Random(); or a late Random _rand initialized in the class constructor) to the transcription controller and replace Random() in _startWaveformAnimation with that field so waveformValues is filled using _rand.nextDouble(); keep existing _waveformTimer, waveformValues updates and notifyListeners intact.

lib/features/transcription/presentation/transcription_screen.dart (1)
61-61: Consider using `Color.withValues()` instead of `withOpacity()`.

`withOpacity()` is less performant as it creates new Color objects. Flutter now recommends `withValues()` for specifying opacity. This applies to Lines 61, 83, 202, and 203.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/transcription/presentation/transcription_screen.dart` at line 61, Replace uses of Colors.white.withOpacity(...) in TranscriptionScreen widget with the newer withValues(...) API to avoid creating extra Color objects; locate the occurrences inside transcription_screen.dart (the expressions using Colors.white.withOpacity at the places referenced) and change them to Colors.white.withValues(alpha: <same value>) (and do the same for the other two occurrences mentioned) so the visual opacity remains identical but uses the recommended withValues call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/features/transcription/data/deepgram_service.dart`:
- Around line 18-25: The HTTP POST in DeepgramService that assigns to the
variable response uses http.post(...) without a timeout; update that call (the
http.post in the method that creates response) to apply a timeout (e.g.,
.timeout(const Duration(seconds: 30))) or switch to an http.Client with a
configured timeout, and handle TimeoutException by catching it and
logging/throwing a clear error. Ensure you modify the call that sets response
and add appropriate error handling so the method (DeepgramService) fails fast on
network stalls.
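A minimal sketch of the timeout handling this comment asks for, assuming the `http` package is already in use (the helper name, URI, and 30-second value are illustrative):

```dart
import 'dart:async';

import 'package:http/http.dart' as http;

/// POSTs to the Deepgram endpoint but fails fast on a network stall
/// instead of hanging indefinitely.
Future<http.Response> postWithTimeout(
  Uri uri, {
  Map<String, String>? headers,
  Object? body,
}) async {
  try {
    return await http
        .post(uri, headers: headers, body: body)
        .timeout(const Duration(seconds: 30));
  } on TimeoutException {
    // Surface a clear, actionable error to the caller.
    throw Exception('Deepgram request timed out after 30 seconds');
  }
}
```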
- Around line 27-33: The code decodes Deepgram's JSON and directly accesses
nested keys
(decodedResponse['results']['channels'][0]['alternatives'][0]['transcript']),
which can throw on malformed responses; update the parsing in the Deepgram
response handling (the block that produces decodedResponse and result) to
validate types and presence at each level (check decodedResponse is a Map, that
'results' and 'channels' exist and are Lists, that channels[0] has
'alternatives' which is a List, and that alternatives[0] contains 'transcript'),
use null-safe lookups and fallback values (e.g., return 'No speech detected'
when fields are missing/empty), and wrap decoding in a try/catch to throw a
clearer Exception mentioning the problematic payload or include the
response.body for debugging while preserving concise error text.
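The defensive parsing described above could be sketched as follows (function name is hypothetical; the nested keys match the Deepgram response shape quoted in the comment):

```dart
import 'dart:convert';

/// Extracts the transcript from a Deepgram response body, validating
/// each level of nesting instead of assuming the shape is correct.
String parseTranscript(String responseBody) {
  try {
    final decoded = jsonDecode(responseBody);
    if (decoded is! Map<String, dynamic>) return 'No speech detected';

    final channels = decoded['results']?['channels'];
    if (channels is! List || channels.isEmpty) return 'No speech detected';

    final alternatives = channels[0]?['alternatives'];
    if (alternatives is! List || alternatives.isEmpty) {
      return 'No speech detected';
    }

    final transcript = alternatives[0]?['transcript'];
    return (transcript is String && transcript.isNotEmpty)
        ? transcript
        : 'No speech detected';
  } on FormatException {
    // Include the payload so malformed responses are debuggable.
    throw Exception('Malformed Deepgram response: $responseBody');
  }
}
```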
- Around line 6-7: The class DeepgramService currently defaults _apiKey to an
empty string which can cause silent failures; update DeepgramService to validate
the API key at construction (e.g., in the DeepgramService constructor) or at the
start of any public method that uses _apiKey (such as transcription/request
methods), and throw a clear exception or return an error if
dotenv.env['DEEPGRAM_API_KEY'] is null/empty; reference the _apiKey field and
the DeepgramService constructor or its public methods so callers get an explicit
failure instead of silent authentication issues.
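One way to implement the fail-fast key validation, assuming the project loads its key via `flutter_dotenv` as the comment suggests:

```dart
import 'package:flutter_dotenv/flutter_dotenv.dart';

class DeepgramService {
  final String _apiKey;

  // Validate at construction so a missing key is an explicit startup
  // error rather than a silent 401 later.
  DeepgramService() : _apiKey = dotenv.env['DEEPGRAM_API_KEY'] ?? '' {
    if (_apiKey.isEmpty) {
      throw StateError(
        'DEEPGRAM_API_KEY is missing or empty; add it to your .env file.',
      );
    }
  }
}
```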
In `@lib/features/transcription/presentation/transcription_controller.dart`:
- Around line 37-39: requestPermissions currently just requests
Permission.microphone without checking the result; change it to capture the
PermissionStatus from Permission.microphone.request(), return a boolean
(granted) or throw/return false on denial, and handle
PermissionStatus.permanentlyDenied by offering to open app settings. Update the
caller that invokes requestPermissions (the code that currently calls
requestPermissions() before starting recording) to check the returned status
and: if false show a user-visible error/permission dialog, and if
permanentlyDenied prompt to open app settings via openAppSettings(). Ensure you
reference and modify requestPermissions and the caller that starts recording so
the flow handles granted, denied, and permanentlyDenied cases explicitly.
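A hedged sketch of the three-way permission flow described above, using the `permission_handler` API the project appears to rely on:

```dart
import 'package:permission_handler/permission_handler.dart';

/// Returns true only when the microphone permission is granted.
/// Permanently denied permissions route the user to system settings.
Future<bool> requestMicrophonePermission() async {
  final status = await Permission.microphone.request();
  if (status.isGranted) return true;
  if (status.isPermanentlyDenied) {
    // The OS will no longer show the prompt; settings is the only fix.
    await openAppSettings();
  }
  return false;
}
```

The caller that starts recording would then bail out (and show a user-visible message) whenever this returns false.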
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
lib/features/transcription/data/deepgram_service.dart
lib/features/transcription/data/gemini_service.dart
lib/features/transcription/domain/transcription_model.dart
lib/features/transcription/presentation/transcription_controller.dart
lib/features/transcription/presentation/transcription_screen.dart
lib/main.dart
pubspec.yaml
Hey @jddeep @SharkyBytes, can you review this PR and merge it? That would avoid any further merge conflicts in the future!
Resolve the CodeRabbit suggestions
…epgramService and TranscriptionController
… initialization crashes at startup.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/services/chatbot_service.dart (1)
65-67: ⚠️ Potential issue | 🟡 Minor

Exception message is misleading for non-network failures.

At Line 67, the catch message says "Could not connect to API" for all exceptions, including decode/shape errors. Please use a neutral failure message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/chatbot_service.dart` around lines 65 - 67, The catch block currently logs and returns "Could not connect to API" for all exceptions (see the catch in chatbot_service.dart where print('Exception: $e') and return "Error: Could not connect to API: $e" are used); change the returned error text to a neutral message (e.g., "Error: Operation failed" or "Error: An unexpected error occurred") while still including the exception details for debugging, and ensure the printed log remains print('Exception: $e') or similar so non-network errors (decode/shape/etc.) aren't mischaracterized as connection failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/services/chatbot_service.dart`:
- Around line 26-27: Remove the raw print statements that dump full prompts,
responses and API bodies in lib/services/chatbot_service.dart (replace the
direct print calls that output prompt/response/raw API body); instead redact or
mask sensitive content and log only non-sensitive metadata (e.g., message
length, truncated hash, or "REDACTED" placeholders) via your app logger (locate
the prints in the method that sends data to Gemini / handles responses—the calls
printing '\n=== GEMINI PROMPT ===', the full prompt, full output, and the raw
API body) so no PHI/PII is written to logs while still retaining enough trace
info for debugging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d41224d8-ea84-4303-b3b4-a8f8d9d1ae0d
📒 Files selected for processing (3)
lib/features/transcription/data/deepgram_service.dart
lib/features/transcription/presentation/transcription_controller.dart
lib/services/chatbot_service.dart
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/features/transcription/presentation/transcription_controller.dart
@SharkyBytes I have made the necessary improvements suggested by CodeRabbit. Please review it; happy to hear your feedback :)
LGTM! |
Addressed Issues:
Fixes #11
Description
Moved all business logic out of TranscriptionScreen, which previously handled recording, HTTP calls, Gemini prompts, and UI all in one file. This PR separates that into proper layers. There are no behavior changes; the app works identically, and the code is now properly organized.
New files created:
Final structure:
lib/
├── main.dart
├── features/
│ └── transcription/
│ ├── data/
│ ├── domain/
│ └── presentation/
├── screens/
└── services/
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: Claude for some minor changes.
I reviewed every file, understood each change, and tested the full recording → transcription → Gemini flow locally before submitting.
Summary by CodeRabbit
New Features
Chores