
Refactor: Separate transcription concerns into service, domain, and presentation layers#12

Merged
SharkyBytes merged 9 commits into AOSSIE-Org:main from hasanravda:hasan/refactor-structure
Mar 21, 2026

Conversation

Contributor

@hasanravda hasanravda commented Feb 26, 2026

Addressed Issues:

Fixes #11

Description

Moved all business logic out of TranscriptionScreen, which previously handled recording, HTTP calls, Gemini prompts, and UI in a single file. This PR separates that into proper layers. There are no behavior changes: the app works identically; the code is just properly organized now.

New files created:

  • Deepgram service - isolates the entire Deepgram HTTP call. Takes audio file path, returns transcript string. Nothing else.
  • Gemini service - owns both Gemini prompts. Summary and prescription generation live here instead of being hardcoded inside the UI file.
  • Transcription model - typed data class grouping raw transcript, summary, and prescription. Replaces loose string variables scattered across screen state.
  • Transcription controller - ChangeNotifier that owns the full flow. Handles permissions, recording start/stop, Deepgram call, Gemini call, and state updates. UI just calls toggleRecording().

Final structure:
lib/
├── main.dart
├── features/
│   └── transcription/
│       ├── data/
│       ├── domain/
│       └── presentation/
├── screens/
└── services/
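
The domain layer described above can be sketched roughly as follows. The class and field names (TranscriptionModel, rawTranscript, summary, prescription, the const constructor, and copyWith) come from this PR's description; the method bodies here are an illustrative reconstruction, not the merged code.

```dart
// Hypothetical sketch of lib/features/transcription/domain/transcription_model.dart.
class TranscriptionModel {
  final String rawTranscript;
  final String summary;
  final String prescription;

  const TranscriptionModel({
    this.rawTranscript = '',
    this.summary = '',
    this.prescription = '',
  });

  // copyWith lets the controller update one field immutably, e.g. store the
  // Deepgram transcript first and fill in the Gemini results when they arrive.
  TranscriptionModel copyWith({
    String? rawTranscript,
    String? summary,
    String? prescription,
  }) {
    return TranscriptionModel(
      rawTranscript: rawTranscript ?? this.rawTranscript,
      summary: summary ?? this.summary,
      prescription: prescription ?? this.prescription,
    );
  }
}
```

An immutable model with copyWith plays well with ChangeNotifier: the controller swaps in a new instance and calls notifyListeners(), instead of mutating loose string fields scattered across screen state.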

Checklist

  • My PR addresses a single issue, fixes a single bug or makes a single improvement.
  • My code follows the project's code style and conventions
  • My changes generate no new warnings or errors
  • I have joined the Discord server and I will share a link to this PR with the project maintainers there
  • I have read the Contribution Guidelines
  • Once I submit my PR, CodeRabbit AI will automatically review it and I will address CodeRabbit's comments.

AI Usage Disclosure

Check one of the checkboxes below:

  • This PR contains AI-generated code. I have tested the code locally and I am responsible for it.

I have used the following AI models and tools: Claude for some minor changes.

I reviewed every file, understood each change, and tested the full recording → transcription → Gemini flow locally before submitting.

Summary by CodeRabbit

  • New Features

    • One-tap audio recording with real-time waveform and status indicators
    • Automatic cloud transcription of recordings
    • AI-generated summary and prescription outputs from transcriptions
    • Navigation to view Transcription, Summary, and Prescription screens
  • Chores

    • Transcription UI moved to provider-backed controller for centralized state management
    • Added provider dependency
    • Improved permission prompts, lifecycle states, and error/status reporting

@coderabbitai

coderabbitai bot commented Feb 26, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 13e04d06-ac51-42e7-b46b-e796e53df587

📥 Commits

Reviewing files that changed from the base of the PR and between 8bf74da and 9d3d272.

📒 Files selected for processing (1)
  • lib/services/chatbot_service.dart
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/services/chatbot_service.dart

📝 Walkthrough

Walkthrough

Adds a separated transcription feature: DeepgramService performs audio transcription; GeminiService (via ChatbotService) generates summary and prescription; TranscriptionModel stores results; TranscriptionController manages recording, transcription, and processing state; TranscriptionScreen provides the UI; main.dart wires the controller via Provider.

Changes

Cohort / File(s) Summary
Service Layer
lib/features/transcription/data/deepgram_service.dart, lib/features/transcription/data/gemini_service.dart, lib/services/chatbot_service.dart
Added DeepgramService to POST audio to Deepgram and parse transcripts; added GeminiService that delegates summary/prescription generation to ChatbotService; ChatbotService updated to accept optional API key, resolve GEMINI key at call time, switch to gemini-2.5-flash, and replace prints with developer.log.
Data Model
lib/features/transcription/domain/transcription_model.dart
Added immutable TranscriptionModel with rawTranscript, summary, prescription, a const constructor, and copyWith.
Business Logic / State
lib/features/transcription/presentation/transcription_controller.dart
Added TranscriptionController (extends ChangeNotifier) and TranscriptionState enum; handles permissions, recording lifecycle, waveform updates, calls DeepgramService and GeminiService, and manages error states.
Presentation
lib/features/transcription/presentation/transcription_screen.dart
New TranscriptionScreen UI consuming TranscriptionController via Provider: gradient layout, waveform, record toggle, status indicator, and navigation to detail screens.
App Bootstrap & Config
lib/main.dart, pubspec.yaml
Refactored main.dart to provide TranscriptionController via ChangeNotifierProvider and import the new screen/controller; added provider dependency in pubspec.yaml.
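
The bootstrap change in that last row can be sketched as below. ChangeNotifierProvider and the import paths follow the structure this PR describes; the exact widget tree in the real main.dart may differ.

```dart
// Hypothetical sketch of the Provider wiring described in the walkthrough.
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

// Paths assumed from the PR's feature layout.
import 'features/transcription/presentation/transcription_controller.dart';
import 'features/transcription/presentation/transcription_screen.dart';

void main() {
  runApp(
    ChangeNotifierProvider(
      // The controller owns the full recording -> transcription -> Gemini flow;
      // the screen only reads state and calls toggleRecording().
      create: (_) => TranscriptionController(),
      child: MaterialApp(home: TranscriptionScreen()),
    ),
  );
}
```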

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant UI as TranscriptionScreen
    participant Controller as TranscriptionController
    participant Recorder as AudioRecorder
    participant Deepgram as DeepgramService
    participant Gemini as GeminiService

    User->>UI: Tap mic (start)
    UI->>Controller: toggleRecording()
    Controller->>Recorder: start(recordingPath)
    Controller->>Controller: start waveform timer

    User->>UI: Tap mic (stop)
    UI->>Controller: toggleRecording()
    Controller->>Recorder: stop()
    Controller->>Controller: set state -> transcribing
    Controller->>Deepgram: transcribe(recordingPath)
    Deepgram-->>Controller: transcript or "No speech detected"

    alt transcript present
        Controller->>Gemini: generateSummary(transcript)
        Gemini-->>Controller: summary
        Controller->>Gemini: generatePrescription(transcript)
        Gemini-->>Controller: prescription
    end

    Controller->>UI: notifyListeners() (state -> done or error)
    UI-->>User: display results

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hopped out of main and cleared the way,
Deepgram listens while Gemini has its say.
Models tucked neat, controllers in flight,
UI hums softly — everything right.
A tiny rabbit cheers this tidy sight 🥕✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main refactoring objective: separating transcription concerns into distinct architectural layers (service, domain, presentation).
Linked Issues check ✅ Passed The PR fully implements the coding requirements from issue #11: Deepgram transcription service created, Gemini processing service added, state management moved to ChangeNotifier controller, and UI separated into presentation layer.
Out of Scope Changes check ✅ Passed All changes align with issue #11 objectives. The ChatbotService API key parameter addition and logging changes support the Gemini service integration; provider dependency addition enables the controller pattern.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (7)
lib/features/transcription/data/gemini_service.dart (3)

6-17: Consider adding error handling or custom exceptions.

Both methods let exceptions propagate directly from ChatbotService. While the controller does catch these, wrapping them in service-specific exceptions would provide better error context and make debugging easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/data/gemini_service.dart` around lines 6 - 17,
Wrap calls to _chatbotService.getGeminiResponse in generateSummary and
generatePrescription with try/catch and rethrow a service-specific exception
(e.g., GeminiServiceException or TranscriptionProcessingException) that includes
context (which method failed and the transcription snippet) and preserves the
original error (as cause or inner exception); update both generateSummary and
generatePrescription to catch any thrown exceptions from
_chatbotService.getGeminiResponse, build a descriptive message mentioning the
method name (generateSummary/generatePrescription) and the transcription, and
throw the custom exception with the original exception attached.

12-17: Unexplained hardcoded delay needs documentation or removal.

The 3-second delay before calling getGeminiResponse is unexplained. If this is for API rate limiting, consider:

  1. Adding a comment explaining the reason
  2. Making the delay configurable
  3. Using a proper rate-limiting/retry mechanism
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/data/gemini_service.dart` around lines 12 - 17,
The hardcoded 3-second Future.delayed in generatePrescription makes timing
behaviour opaque; remove the literal delay and either (A) document its purpose
or (B) make it configurable and/or replaced with a proper rate-limiting/retry
mechanism. Specifically, remove the await Future.delayed(const Duration(seconds:
3)); from generatePrescription and instead read a Duration (e.g.,
prescriptionDelay) injected via the class constructor or a configuration
service, add a one-line comment above generatePrescription explaining why any
delay exists, and if the delay was intended for API throttling replace it with a
rate limiter or retry logic that wraps _chatbotService.getGeminiResponse to
handle backoff and limits.

3-4: Consider dependency injection for testability.

Instantiating ChatbotService directly inside the class makes it difficult to mock in unit tests. Consider accepting ChatbotService as a constructor parameter with a default value.

♻️ Proposed refactor for dependency injection
 class GeminiService {
-  final ChatbotService _chatbotService = ChatbotService();
+  final ChatbotService _chatbotService;
+
+  GeminiService({ChatbotService? chatbotService})
+      : _chatbotService = chatbotService ?? ChatbotService();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/data/gemini_service.dart` around lines 3 - 4,
GeminiService currently constructs a ChatbotService internally which hinders
mocking; modify GeminiService to accept a ChatbotService parameter in its
constructor (e.g., GeminiService({ChatbotService? chatbotService}) :
_chatbotService = chatbotService ?? ChatbotService()) and keep the private field
final _chatbotService so callers/tests can inject a mock ChatbotService while
defaulting to a real instance when none is provided.
lib/features/transcription/presentation/transcription_controller.dart (3)

14-17: Consider dependency injection for services.

Instantiating AudioRecorder, DeepgramService, and GeminiService directly makes unit testing difficult. Consider accepting these as constructor parameters.

♻️ Proposed refactor for dependency injection
 class TranscriptionController extends ChangeNotifier {
-  final _audioRecorder = AudioRecorder();
-  final _deepgramService = DeepgramService();
-  final _geminiService = GeminiService();
+  final AudioRecorder _audioRecorder;
+  final DeepgramService _deepgramService;
+  final GeminiService _geminiService;
+
+  TranscriptionController({
+    AudioRecorder? audioRecorder,
+    DeepgramService? deepgramService,
+    GeminiService? geminiService,
+  })  : _audioRecorder = audioRecorder ?? AudioRecorder(),
+        _deepgramService = deepgramService ?? DeepgramService(),
+        _geminiService = geminiService ?? GeminiService();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/presentation/transcription_controller.dart` around
lines 14 - 17, The TranscriptionController currently instantiates AudioRecorder,
DeepgramService, and GeminiService directly (fields _audioRecorder,
_deepgramService, _geminiService); change the class to accept these dependencies
via constructor parameters (with optional defaults for production) so tests can
inject mocks: add constructor parameters for AudioRecorder, DeepgramService, and
GeminiService, assign them to the existing fields, and remove direct new
allocations inside the class body so unit tests can pass test doubles into
TranscriptionController.

115-128: Consider parallelizing independent Gemini calls.

generateSummary and generatePrescription are independent operations that could run concurrently. However, note that generatePrescription has a 3-second delay that may be intentional rate limiting.

♻️ Proposed parallel execution (if rate limiting permits)
   Future<void> _processWithGemini(String transcript) async {
     try {
-      final summary = await _geminiService.generateSummary(transcript);
-      final prescription = await _geminiService.generatePrescription(transcript);
+      final results = await Future.wait([
+        _geminiService.generateSummary(transcript),
+        _geminiService.generatePrescription(transcript),
+      ]);
+      final summary = results[0];
+      final prescription = results[1];

       data = data.copyWith(summary: summary, prescription: prescription);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/presentation/transcription_controller.dart` around
lines 115 - 128, _processWithGemini currently awaits
_geminiService.generateSummary and then _geminiService.generatePrescription
sequentially; change it to run those independent calls in parallel (e.g., kick
off both Futures and await them together with Future.wait or similar) so you
collect summary and prescription concurrently before calling data.copyWith and
updating state; keep the same error handling and preserve awareness of the
intentional 3-second delay/rate limiting on generatePrescription (add a comment
or a configuration gate if you need to keep it serialized in some environments).

130-137: Reuse Random instance instead of creating new ones.

Line 133 creates a new Random() instance on every timer tick (400 times/second for 40 values). Create a single instance as a field and reuse it.

♻️ Proposed fix to reuse Random instance
+  final _random = Random();
+
   void _startWaveformAnimation() {
     _waveformTimer = Timer.periodic(const Duration(milliseconds: 100), (_) {
       for (int i = 0; i < waveformValues.length; i++) {
-        waveformValues[i] = Random().nextDouble();
+        waveformValues[i] = _random.nextDouble();
       }
       notifyListeners();
     });
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/presentation/transcription_controller.dart` around
lines 130 - 137, The _startWaveformAnimation method currently instantiates a new
Random() on every tick; add a single Random field (e.g., final Random _rand =
Random(); or a late Random _rand initialized in the class constructor) to the
transcription controller and replace Random() in _startWaveformAnimation with
that field so waveformValues is filled using _rand.nextDouble(); keep existing
_waveformTimer, waveformValues updates and notifyListeners intact.
lib/features/transcription/presentation/transcription_screen.dart (1)

61-61: Consider using Color.withValues() instead of withOpacity().

withOpacity() is less performant as it creates new Color objects. Flutter now recommends withValues() for specifying opacity. This applies to Lines 61, 83, 202, and 203.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/features/transcription/presentation/transcription_screen.dart` at line
61, Replace uses of Colors.white.withOpacity(...) in TranscriptionScreen widget
with the newer withValues(...) API to avoid creating extra Color objects; locate
the occurrences inside transcription_screen.dart (the expressions using
Colors.white.withOpacity at the places referenced) and change them to
Colors.white.withValues(alpha: <same value>) (and do the same for the other
two occurrences mentioned) so the visual opacity remains identical but uses the
recommended withValues call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lib/features/transcription/data/deepgram_service.dart`:
- Around line 18-25: The HTTP POST in DeepgramService that assigns to the
variable response uses http.post(...) without a timeout; update that call (the
http.post in the method that creates response) to apply a timeout (e.g.,
.timeout(const Duration(seconds: 30))) or switch to an http.Client with a
configured timeout, and handle TimeoutException by catching it and
logging/throwing a clear error. Ensure you modify the call that sets response
and add appropriate error handling so the method (DeepgramService) fails fast on
network stalls.
- Around line 27-33: The code decodes Deepgram's JSON and directly accesses
nested keys
(decodedResponse['results']['channels'][0]['alternatives'][0]['transcript']),
which can throw on malformed responses; update the parsing in the Deepgram
response handling (the block that produces decodedResponse and result) to
validate types and presence at each level (check decodedResponse is a Map, that
'results' and 'channels' exist and are Lists, that channels[0] has
'alternatives' which is a List, and that alternatives[0] contains 'transcript'),
use null-safe lookups and fallback values (e.g., return 'No speech detected'
when fields are missing/empty), and wrap decoding in a try/catch to throw a
clearer Exception mentioning the problematic payload or include the
response.body for debugging while preserving concise error text.
- Around line 6-7: The class DeepgramService currently defaults _apiKey to an
empty string which can cause silent failures; update DeepgramService to validate
the API key at construction (e.g., in the DeepgramService constructor) or at the
start of any public method that uses _apiKey (such as transcription/request
methods), and throw a clear exception or return an error if
dotenv.env['DEEPGRAM_API_KEY'] is null/empty; reference the _apiKey field and
the DeepgramService constructor or its public methods so callers get an explicit
failure instead of silent authentication issues.

In `@lib/features/transcription/presentation/transcription_controller.dart`:
- Around line 37-39: requestPermissions currently just requests
Permission.microphone without checking the result; change it to capture the
PermissionStatus from Permission.microphone.request(), return a boolean
(granted) or throw/return false on denial, and handle
PermissionStatus.permanentlyDenied by offering to open app settings. Update the
caller that invokes requestPermissions (the code that currently calls
requestPermissions() before starting recording) to check the returned status
and: if false show a user-visible error/permission dialog, and if
permanentlyDenied prompt to open app settings via openAppSettings(). Ensure you
reference and modify requestPermissions and the caller that starts recording so
the flow handles granted, denied, and permanentlyDenied cases explicitly.

---

Nitpick comments:
In `@lib/features/transcription/data/gemini_service.dart`:
- Around line 6-17: Wrap calls to _chatbotService.getGeminiResponse in
generateSummary and generatePrescription with try/catch and rethrow a
service-specific exception (e.g., GeminiServiceException or
TranscriptionProcessingException) that includes context (which method failed and
the transcription snippet) and preserves the original error (as cause or inner
exception); update both generateSummary and generatePrescription to catch any
thrown exceptions from _chatbotService.getGeminiResponse, build a descriptive
message mentioning the method name (generateSummary/generatePrescription) and
the transcription, and throw the custom exception with the original exception
attached.
- Around line 12-17: The hardcoded 3-second Future.delayed in
generatePrescription makes timing behaviour opaque; remove the literal delay and
either (A) document its purpose or (B) make it configurable and/or replaced with
a proper rate-limiting/retry mechanism. Specifically, remove the await
Future.delayed(const Duration(seconds: 3)); from generatePrescription and
instead read a Duration (e.g., prescriptionDelay) injected via the class
constructor or a configuration service, add a one-line comment above
generatePrescription explaining why any delay exists, and if the delay was
intended for API throttling replace it with a rate limiter or retry logic that
wraps _chatbotService.getGeminiResponse to handle backoff and limits.
- Around line 3-4: GeminiService currently constructs a ChatbotService
internally which hinders mocking; modify GeminiService to accept a
ChatbotService parameter in its constructor (e.g.,
GeminiService({ChatbotService? chatbotService}) : _chatbotService =
chatbotService ?? ChatbotService()) and keep the private field final
_chatbotService so callers/tests can inject a mock ChatbotService while
defaulting to a real instance when none is provided.

In `@lib/features/transcription/presentation/transcription_controller.dart`:
- Around line 14-17: The TranscriptionController currently instantiates
AudioRecorder, DeepgramService, and GeminiService directly (fields
_audioRecorder, _deepgramService, _geminiService); change the class to accept
these dependencies via constructor parameters (with optional defaults for
production) so tests can inject mocks: add constructor parameters for
AudioRecorder, DeepgramService, and GeminiService, assign them to the existing
fields, and remove direct new allocations inside the class body so unit tests
can pass test doubles into TranscriptionController.
- Around line 115-128: _processWithGemini currently awaits
_geminiService.generateSummary and then _geminiService.generatePrescription
sequentially; change it to run those independent calls in parallel (e.g., kick
off both Futures and await them together with Future.wait or similar) so you
collect summary and prescription concurrently before calling data.copyWith and
updating state; keep the same error handling and preserve awareness of the
intentional 3-second delay/rate limiting on generatePrescription (add a comment
or a configuration gate if you need to keep it serialized in some environments).
- Around line 130-137: The _startWaveformAnimation method currently instantiates
a new Random() on every tick; add a single Random field (e.g., final Random
_rand = Random(); or a late Random _rand initialized in the class constructor)
to the transcription controller and replace Random() in _startWaveformAnimation
with that field so waveformValues is filled using _rand.nextDouble(); keep
existing _waveformTimer, waveformValues updates and notifyListeners intact.

In `@lib/features/transcription/presentation/transcription_screen.dart`:
- Line 61: Replace uses of Colors.white.withOpacity(...) in TranscriptionScreen
widget with the newer withValues(...) API to avoid creating extra Color objects;
locate the occurrences inside transcription_screen.dart (the expressions using
Colors.white.withOpacity at the places referenced) and change them to
Colors.white.withValues(alpha: <same value>) (and do the same for the other
two occurrences mentioned) so the visual opacity remains identical but uses the
recommended withValues call.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ece30ed and 6c64ff1.

📒 Files selected for processing (7)
  • lib/features/transcription/data/deepgram_service.dart
  • lib/features/transcription/data/gemini_service.dart
  • lib/features/transcription/domain/transcription_model.dart
  • lib/features/transcription/presentation/transcription_controller.dart
  • lib/features/transcription/presentation/transcription_screen.dart
  • lib/main.dart
  • pubspec.yaml

@hasanravda hasanravda changed the title Hasan/refactor structure Refactor: Separate transcription concerns into service, domain, and presentation layers Feb 27, 2026
@hasanravda
Contributor Author

Hey @jddeep @SharkyBytes, can you review this PR and merge it? That would avoid any further merge conflicts in the future!

@SharkyBytes
Contributor

Resolve the CodeRabbit suggestions


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/services/chatbot_service.dart (1)

65-67: ⚠️ Potential issue | 🟡 Minor

Exception message is misleading for non-network failures.

At Line 67, the catch message says "Could not connect to API" for all exceptions, including decode/shape errors. Please use a neutral failure message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/services/chatbot_service.dart` around lines 65 - 67, The catch block
currently logs and returns "Could not connect to API" for all exceptions (see
the catch in chatbot_service.dart where print('Exception: $e') and return
"Error: Could not connect to API: $e" are used); change the returned error text
to a neutral message (e.g., "Error: Operation failed" or "Error: An unexpected
error occurred") while still including the exception details for debugging, and
ensure the printed log remains print('Exception: $e') or similar so non-network
errors (decode/shape/etc.) aren't mischaracterized as connection failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lib/services/chatbot_service.dart`:
- Around line 26-27: Remove the raw print statements that dump full prompts,
responses and API bodies in lib/services/chatbot_service.dart (replace the
direct print calls that output prompt/response/raw API body); instead redact or
mask sensitive content and log only non-sensitive metadata (e.g., message
length, truncated hash, or "REDACTED" placeholders) via your app logger (locate
the prints in the method that sends data to Gemini / handles responses—the calls
printing '\n=== GEMINI PROMPT ===', the full prompt, full output, and the raw
API body) so no PHI/PII is written to logs while still retaining enough trace
info for debugging.

---

Outside diff comments:
In `@lib/services/chatbot_service.dart`:
- Around line 65-67: The catch block currently logs and returns "Could not
connect to API" for all exceptions (see the catch in chatbot_service.dart where
print('Exception: $e') and return "Error: Could not connect to API: $e" are
used); change the returned error text to a neutral message (e.g., "Error:
Operation failed" or "Error: An unexpected error occurred") while still
including the exception details for debugging, and ensure the printed log
remains print('Exception: $e') or similar so non-network errors
(decode/shape/etc.) aren't mischaracterized as connection failures.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d41224d8-ea84-4303-b3b4-a8f8d9d1ae0d

📥 Commits

Reviewing files that changed from the base of the PR and between 6c64ff1 and 8bf74da.

📒 Files selected for processing (3)
  • lib/features/transcription/data/deepgram_service.dart
  • lib/features/transcription/presentation/transcription_controller.dart
  • lib/services/chatbot_service.dart
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/features/transcription/presentation/transcription_controller.dart

@hasanravda
Contributor Author

@SharkyBytes I have made the necessary improvements suggested by CodeRabbit. Please review, and I'd be happy to hear your feedback :)

@SharkyBytes
Contributor

LGTM!

@SharkyBytes SharkyBytes merged commit 3f66d62 into AOSSIE-Org:main Mar 21, 2026
1 check passed

Development

Successfully merging this pull request may close these issues.

[FEATURE]: Refactor TranscriptionScreen to Follow Separation of Concerns and Improve Maintainability

2 participants