feat(ai): Use Gemini 3 Pro for audit analyses (GBP, SEO, social media)
Some checks are pending
NordaBiz Tests / Unit & Integration Tests (push) Waiting to run
NordaBiz Tests / E2E Tests (Playwright) (push) Blocked by required conditions
NordaBiz Tests / Smoke Tests (Production) (push) Blocked by required conditions
NordaBiz Tests / Send Failure Notification (push) Blocked by required conditions
- Add per-call model override parameter to generate_text()
- GBP audit, SEO/social audit analysis, and audit content generation now use gemini-3-pro-preview for highest-quality reasoning
- Chat and other features remain on 3-flash (cheaper, faster)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
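The split described in the commit message (audits on the pro model, everything else on the default flash model) can be sketched as a small routing helper. This is illustrative only: the feature names and the helper itself are hypothetical; just the `'3-pro'` alias and the "no override means service default" behavior come from the diff below.

```python
# Hypothetical routing helper illustrating the commit's model split.
# Feature names are illustrative; only the '3-pro' alias comes from the diff.
AUDIT_FEATURES = {'gbp_audit_ai', 'seo_audit', 'social_audit', 'audit_content'}

def model_override_for(feature: str):
    """Return the per-call model override: '3-pro' for audit analyses,
    None for everything else (the service then falls back to its
    configured default, 3-flash)."""
    return '3-pro' if feature in AUDIT_FEATURES else None
```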
This commit is contained in:
parent
3d26ea6119
commit
a6d7fc343e
```diff
@@ -625,6 +625,7 @@ def generate_analysis(company_id: int, audit_type: str, user_id: int = None, for
         user_id=user_id,
         company_id=company_id,
         related_entity_type=f'{audit_type}_audit',
+        model='3-pro',
     )
 
     if not response_text:
```
```diff
@@ -812,6 +813,7 @@ Max 500 słów."""
         user_id=user_id,
         company_id=company_id,
         related_entity_type=f'audit_action_{action_type}',
+        model='3-pro',
     )
 
     if not content:
```
```diff
@@ -1308,7 +1308,8 @@ class GBPAuditService:
             feature='gbp_audit_ai',
             user_id=user_id,
             temperature=0.7,
-            max_tokens=2000
+            max_tokens=2000,
+            model='3-pro'
         )
 
         # Parse AI response
```
```diff
@@ -235,12 +235,13 @@ class GeminiService:
         user_id: Optional[int] = None,
         company_id: Optional[int] = None,
         related_entity_type: Optional[str] = None,
-        related_entity_id: Optional[int] = None
+        related_entity_id: Optional[int] = None,
+        model: Optional[str] = None
     ) -> str:
         """
         Generate text using Gemini API with automatic fallback, cost tracking and thinking mode.
 
-        On 429 RESOURCE_EXHAUSTED, automatically retries with the next model in fallback chain.
+        On 429/503, automatically retries with the next model in fallback chain.
 
         Args:
             prompt: Text prompt to send to the model
```
```diff
@@ -253,6 +254,7 @@ class GeminiService:
             company_id: Optional company ID for context
             related_entity_type: Entity type ('zopk_news', 'chat_message', etc.)
             related_entity_id: Entity ID for reference
+            model: Override model for this call (alias like '3-pro' or full name like 'gemini-3-pro-preview')
 
         Returns:
             Generated text response
```
```diff
@@ -260,8 +262,13 @@ class GeminiService:
         Raises:
             Exception: If API call fails on all models
         """
+        # Resolve model override: accept alias ('3-pro') or full name ('gemini-3-pro-preview')
+        primary_model = self.model_name
+        if model:
+            primary_model = GEMINI_MODELS.get(model, model)
+
         # Build ordered list of models to try: primary first, then fallbacks
-        models_to_try = [self.model_name]
+        models_to_try = [primary_model]
         for m in self.fallback_models:
             if m not in models_to_try:
                 models_to_try.append(m)
```
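The override-resolution logic from the hunk above can be run standalone as a sketch. `GEMINI_MODELS` here is an illustrative stand-in for the service's real alias table (only the `'3-pro'` → `gemini-3-pro-preview` mapping is confirmed by the commit); the function body mirrors the diffed code.

```python
# Illustrative alias table; the real GEMINI_MODELS mapping lives in the
# service's configuration. Only the '3-pro' entry is confirmed by the commit.
GEMINI_MODELS = {
    '3-pro': 'gemini-3-pro-preview',
    '3-flash': 'gemini-3-flash-preview',  # assumed alias for the default model
}

def build_models_to_try(default_model, fallback_models, model=None):
    """Return the ordered model list: the override (or default) first, then fallbacks.

    `model` may be an alias ('3-pro') or a full name; unknown values pass
    through unchanged, matching GEMINI_MODELS.get(model, model) in the diff.
    """
    primary_model = default_model
    if model:
        primary_model = GEMINI_MODELS.get(model, model)
    models_to_try = [primary_model]
    for m in fallback_models:
        if m not in models_to_try:  # avoid trying the same model twice
            models_to_try.append(m)
    return models_to_try
```

Note the de-duplication: if the override resolves to a model already in the fallback chain, it is simply promoted to the front rather than tried twice.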