diff --git a/apps/docs/content/docs/en/blocks/human-in-the-loop.mdx b/apps/docs/content/docs/en/blocks/human-in-the-loop.mdx index d4e4705e22b..04f210ad17b 100644 --- a/apps/docs/content/docs/en/blocks/human-in-the-loop.mdx +++ b/apps/docs/content/docs/en/blocks/human-in-the-loop.mdx @@ -78,7 +78,7 @@ Defines the fields approvers fill in when responding. This data becomes availabl } ``` -Access resume data in downstream blocks using ``. +Access resume data in downstream blocks using ``. ## Approval Methods @@ -93,11 +93,12 @@ Access resume data in downstream blocks using ``. ### REST API - Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the paused execution detail. + Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the `_resume` object in the paused execution response. ```bash POST /api/resume/{workflowId}/{executionId}/{contextId} Content-Type: application/json + X-API-Key: your-api-key { "input": { @@ -107,23 +108,44 @@ Access resume data in downstream blocks using ``. } ``` - The response includes a new `executionId` for the resumed execution: + The resume endpoint automatically respects the execution mode used in the original execute call: + + - **Sync mode** (default) — The response waits for the remaining workflow to complete and returns the full result: + + ```json + { + "success": true, + "status": "completed", + "executionId": "", + "output": { ... }, + "metadata": { "duration": 1234, "startTime": "...", "endTime": "..." } + } + ``` + + If the resumed workflow hits another HITL block, the response returns `"status": "paused"` with new `_resume` URLs in the output. + + - **Stream mode** (`stream: true` on the original execute call) — The resume response streams SSE events with `selectedOutputs` chunks, just like the initial execution. 
+ + - **Async mode** (`X-Execution-Mode: async` on the original execute call) — The resume dispatches execution to a background worker and returns immediately with `202`: ```json { "status": "started", "executionId": "", - "message": "Resume execution started." + "message": "Resume execution started asynchronously." } ``` - To poll execution progress after resuming, connect to the SSE stream: + #### Polling execution status + + To check on a paused execution or poll for completion after an async resume: ```bash - GET /api/workflows/{workflowId}/executions/{resumeExecutionId}/stream + GET /api/resume/{workflowId}/{executionId} + X-API-Key: your-api-key ``` - Build custom approval UIs or integrate with existing systems. + Returns the full paused execution detail with all pause points, their statuses, and resume links. Returns `404` when the execution has completed and is no longer paused. ### Webhook @@ -132,6 +154,53 @@ Access resume data in downstream blocks using ``. +## API Execute Behavior + +When triggering a workflow via the execute API (`POST /api/workflows/{id}/execute`), HITL blocks cause the execution to pause and return the `_resume` data in the response: + + + + The response includes the full pause data with resume URLs: + + ```json + { + "success": true, + "executionId": "", + "output": { + "data": { + "operation": "human", + "_resume": { + "apiUrl": "/api/resume/{workflowId}/{executionId}/{contextId}", + "uiUrl": "/resume/{workflowId}/{executionId}", + "contextId": "", + "executionId": "", + "workflowId": "" + } + } + } + } + ``` + + + Blocks before the HITL stream their `selectedOutputs` normally. 
When execution pauses, the final SSE event includes `status: "paused"` and the `_resume` data: + + ``` + data: {"blockId":"agent1","chunk":"streamed content..."} + data: {"event":"final","data":{"success":true,"output":{...,"_resume":{...}},"status":"paused"}} + data: "[DONE]" + ``` + + On resume, blocks after the HITL stream their `selectedOutputs` the same way. + + + HITL blocks are automatically excluded from the `selectedOutputs` dropdown since their data is always included in the pause response. + + + + Returns `202` immediately. Use the polling endpoint to check when the execution pauses. + + + ## Common Use Cases **Content Approval** - Review AI-generated content before publishing @@ -161,9 +230,9 @@ Agent (Generate) → Human in the Loop (QA) → Gmail (Send) **`response`** - Display data shown to the approver (json) **`submission`** - Form submission data from the approver (json) **`submittedAt`** - ISO timestamp when the workflow was resumed -**`resumeInput.*`** - All fields defined in Resume Form become available after the workflow resumes +**``** - All fields defined in Resume Form become available at the top level after the workflow resumes -Access using ``. +Access using ``. ## Example @@ -187,7 +256,7 @@ Access using ``. **Downstream Usage:** ```javascript // Condition block - === true + === true ``` The example below shows an approval portal as seen by an approver after the workflow is paused. Approvers can review the data and provide inputs as a part of the workflow resumption. The approval portal can be accessed directly via the unique URL, ``. @@ -204,7 +273,7 @@ The example below shows an approval portal as seen by an approver after the work to reference specific fields from the resume form. For example, if your block ID is 'approval1' and the form has an 'approved' field, use ." }, + { question: "How do I access the approver's input in downstream blocks?", answer: "Use the syntax to reference specific fields from the resume form. 
For example, if your block name is 'approval1' and the form has an 'approved' field, use ." }, { question: "Can I chain multiple Human in the Loop blocks for multi-stage approvals?", answer: "Yes. You can place multiple Human in the Loop blocks in sequence to create multi-stage approval workflows. Each block pauses independently and can have its own notification configuration and resume form fields." }, { question: "Can I resume the workflow programmatically without the portal?", answer: "Yes. Each block exposes a resume API endpoint that you can call with a POST request containing the form data as JSON. This lets you build custom approval UIs or integrate with existing systems like Jira or ServiceNow." }, { question: "What outputs are available after the workflow resumes?", answer: "The block outputs include the approval portal URL, the resume API endpoint URL, the display data shown to the approver, the form submission data, the raw resume input, and an ISO timestamp of when the workflow was resumed." 
}, diff --git a/apps/docs/content/docs/en/tools/athena.mdx b/apps/docs/content/docs/en/tools/athena.mdx index d77394bcefc..698188291f6 100644 --- a/apps/docs/content/docs/en/tools/athena.mdx +++ b/apps/docs/content/docs/en/tools/athena.mdx @@ -113,7 +113,7 @@ Retrieve the results of a completed Athena query execution | `awsAccessKeyId` | string | Yes | AWS access key ID | | `awsSecretAccessKey` | string | Yes | AWS secret access key | | `queryExecutionId` | string | Yes | Query execution ID to get results for | -| `maxResults` | number | No | Maximum number of rows to return \(1-1000\) | +| `maxResults` | number | No | Maximum number of rows to return \(1-999\) | | `nextToken` | string | No | Pagination token from a previous request | #### Output diff --git a/apps/docs/content/docs/en/tools/cloudwatch.mdx b/apps/docs/content/docs/en/tools/cloudwatch.mdx index a3c5757a87a..1fc1b19ea75 100644 --- a/apps/docs/content/docs/en/tools/cloudwatch.mdx +++ b/apps/docs/content/docs/en/tools/cloudwatch.mdx @@ -10,6 +10,24 @@ import { BlockInfoCard } from "@/components/ui/block-info-card" color="linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)" /> +{/* MANUAL-CONTENT-START:intro */} +[AWS CloudWatch](https://aws.amazon.com/cloudwatch/) is a monitoring and observability service that provides data and actionable insights for AWS resources, applications, and services. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, giving you a unified view of your AWS environment. 
+ +With the CloudWatch integration, you can: + +- **Query Logs (Insights)**: Run CloudWatch Log Insights queries against one or more log groups to analyze log data with a powerful query language +- **Describe Log Groups**: List available CloudWatch log groups in your account, optionally filtered by name prefix +- **Get Log Events**: Retrieve log events from a specific log stream within a log group +- **Describe Log Streams**: List log streams within a log group, ordered by last event time or filtered by name prefix +- **List Metrics**: Browse available CloudWatch metrics, optionally filtered by namespace, metric name, or recent activity +- **Get Metric Statistics**: Retrieve statistical data for a metric over a specified time range with configurable granularity +- **Publish Metric**: Publish custom metric data points to CloudWatch for your own application monitoring +- **Describe Alarms**: List and filter CloudWatch alarms by name prefix, state, or alarm type + +In Sim, the CloudWatch integration enables your agents to monitor AWS infrastructure, analyze application logs, track custom metrics, and respond to alarm states as part of automated DevOps and SRE workflows. This is especially powerful when combined with other AWS integrations like CloudFormation and SNS for end-to-end infrastructure management. +{/* MANUAL-CONTENT-END */} + + ## Usage Instructions Integrate AWS CloudWatch into workflows. Run Log Insights queries, list log groups, retrieve log events, list and get metrics, and monitor alarms. Requires AWS access key and secret access key. 
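Both the CloudWatch statistics and publish-metric routes in this diff accept `dimensions` as a JSON string of name/value pairs and convert it into the `Name`/`Value` array shape the AWS SDK expects. A minimal sketch of that conversion, assuming a standalone helper (the function name is illustrative, not part of the codebase):

```typescript
// Convert a dimensions JSON string (e.g. '{"Environment":"prod"}') into the
// AWS SDK Dimensions shape. Rejects non-object JSON, mirroring the zod
// refinement the put-metric-data route applies before calling CloudWatch.
function parseDimensions(raw?: string): { Name: string; Value: string }[] {
  if (!raw) return []
  const parsed: unknown = JSON.parse(raw)
  if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
    throw new Error('dimensions must be a JSON object of name/value pairs')
  }
  return Object.entries(parsed as Record<string, unknown>).map(([name, value]) => ({
    Name: name,
    Value: String(value), // CloudWatch dimension values are always strings
  }))
}
```

Non-string values are coerced with `String(...)`, matching the route's behavior, so `{"Port": 8080}` publishes the dimension value `"8080"`.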
@@ -155,6 +173,34 @@ Get statistics for a CloudWatch metric over a time range | `label` | string | Metric label | | `datapoints` | array | Datapoints with timestamp and statistics values | +### `cloudwatch_put_metric_data` + +Publish a custom metric data point to CloudWatch + +#### Input + +| Parameter | Type | Required | Description | +| --------- | ---- | -------- | ----------- | +| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) | +| `awsAccessKeyId` | string | Yes | AWS access key ID | +| `awsSecretAccessKey` | string | Yes | AWS secret access key | +| `namespace` | string | Yes | Metric namespace \(e.g., Custom/MyApp\) | +| `metricName` | string | Yes | Name of the metric | +| `value` | number | Yes | Metric value to publish | +| `unit` | string | No | Unit of the metric \(e.g., Count, Seconds, Bytes\) | +| `dimensions` | string | No | JSON string of dimension name/value pairs | + +#### Output + +| Parameter | Type | Description | +| --------- | ---- | ----------- | +| `success` | boolean | Whether the metric was published successfully | +| `namespace` | string | Metric namespace | +| `metricName` | string | Metric name | +| `value` | number | Published metric value | +| `unit` | string | Metric unit | +| `timestamp` | string | Timestamp when the metric was published | + ### `cloudwatch_describe_alarms` List and filter CloudWatch alarms diff --git a/apps/docs/content/docs/en/tools/jira_service_management.mdx b/apps/docs/content/docs/en/tools/jira_service_management.mdx index cd294152d3e..533acee20ad 100644 --- a/apps/docs/content/docs/en/tools/jira_service_management.mdx +++ b/apps/docs/content/docs/en/tools/jira_service_management.mdx @@ -113,10 +113,11 @@ Create a new service request in Jira Service Management | `cloudId` | string | No | Jira Cloud ID for the instance | | `serviceDeskId` | string | Yes | Service Desk ID \(e.g., "1", "2"\) | | `requestTypeId` | string | Yes | Request Type ID \(e.g., "10", "15"\) | -| `summary` | string | Yes | 
Summary/title for the service request | +| `summary` | string | No | Summary/title for the service request \(required unless using Form Answers\) | | `description` | string | No | Description for the service request | | `raiseOnBehalfOf` | string | No | Account ID of customer to raise request on behalf of | | `requestFieldValues` | json | No | Request field values as key-value pairs \(overrides summary/description if provided\) | +| `formAnswers` | json | No | Form answers for form-based request types \(e.g., \{"summary": \{"text": "Title"\}, "customfield_10010": \{"choices": \["10320"\]\}\}\) | | `requestParticipants` | string | No | Comma-separated account IDs to add as request participants | | `channel` | string | No | Channel the request originates from \(e.g., portal, email\) | diff --git a/apps/sim/app/(landing)/integrations/data/integrations.json b/apps/sim/app/(landing)/integrations/data/integrations.json index e6658415998..a05fcbb7eff 100644 --- a/apps/sim/app/(landing)/integrations/data/integrations.json +++ b/apps/sim/app/(landing)/integrations/data/integrations.json @@ -2044,12 +2044,16 @@ "name": "Get Metric Statistics", "description": "Get statistics for a CloudWatch metric over a time range" }, + { + "name": "Publish Metric", + "description": "Publish a custom metric data point to CloudWatch" + }, { "name": "Describe Alarms", "description": "List and filter CloudWatch alarms" } ], - "operationCount": 7, + "operationCount": 8, "triggers": [], "triggerCount": 0, "authType": "none", diff --git a/apps/sim/app/api/auth/socket-token/route.ts b/apps/sim/app/api/auth/socket-token/route.ts index 2a6965ee06b..810f149b8bb 100644 --- a/apps/sim/app/api/auth/socket-token/route.ts +++ b/apps/sim/app/api/auth/socket-token/route.ts @@ -23,6 +23,18 @@ export async function POST() { return NextResponse.json({ token: response.token }) } catch (error) { + // better-auth's sessionMiddleware throws APIError("UNAUTHORIZED") with no message + // when the session is 
missing/expired — surface this as a 401, not a 500. + if ( + error instanceof Error && + ('statusCode' in error || 'status' in error) && + ((error as Record).statusCode === 401 || + (error as Record).status === 'UNAUTHORIZED') + ) { + logger.warn('Socket token request with invalid/expired session') + return NextResponse.json({ error: 'Authentication required' }, { status: 401 }) + } + logger.error('Failed to generate socket token', { error: error instanceof Error ? error.message : String(error), stack: error instanceof Error ? error.stack : undefined, diff --git a/apps/sim/app/api/chat/[identifier]/route.test.ts b/apps/sim/app/api/chat/[identifier]/route.test.ts index 31d3a0bfde4..dac5048fc86 100644 --- a/apps/sim/app/api/chat/[identifier]/route.test.ts +++ b/apps/sim/app/api/chat/[identifier]/route.test.ts @@ -140,6 +140,10 @@ vi.mock('@/lib/workflows/streaming/streaming', () => ({ createStreamingResponse: vi.fn().mockImplementation(async () => createMockStream()), })) +vi.mock('@/lib/workflows/executor/execute-workflow', () => ({ + executeWorkflow: vi.fn().mockResolvedValue({ success: true, output: {} }), +})) + vi.mock('@/lib/core/utils/sse', () => ({ SSE_HEADERS: { 'Content-Type': 'text/event-stream', @@ -410,14 +414,7 @@ describe('Chat Identifier API Route', () => { expect(createStreamingResponse).toHaveBeenCalledWith( expect.objectContaining({ - workflow: expect.objectContaining({ - id: 'workflow-id', - userId: 'user-id', - }), - input: expect.objectContaining({ - input: 'Hello world', - conversationId: 'conv-123', - }), + executeFn: expect.any(Function), streamConfig: expect.objectContaining({ isSecureMode: true, workflowTriggerType: 'chat', @@ -494,9 +491,9 @@ describe('Chat Identifier API Route', () => { expect(createStreamingResponse).toHaveBeenCalledWith( expect.objectContaining({ - input: expect.objectContaining({ - input: 'Hello world', - conversationId: 'test-conversation-123', + executeFn: expect.any(Function), + streamConfig: 
expect.objectContaining({ + workflowTriggerType: 'chat', }), }) ) @@ -510,9 +507,7 @@ describe('Chat Identifier API Route', () => { expect(createStreamingResponse).toHaveBeenCalledWith( expect.objectContaining({ - input: expect.objectContaining({ - input: 'Hello world', - }), + executeFn: expect.any(Function), }) ) }) diff --git a/apps/sim/app/api/chat/[identifier]/route.ts b/apps/sim/app/api/chat/[identifier]/route.ts index 3f6e14a41f7..de826d8c7e9 100644 --- a/apps/sim/app/api/chat/[identifier]/route.ts +++ b/apps/sim/app/api/chat/[identifier]/route.ts @@ -199,6 +199,7 @@ export async function POST( } const { createStreamingResponse } = await import('@/lib/workflows/streaming/streaming') + const { executeWorkflow } = await import('@/lib/workflows/executor/execute-workflow') const { SSE_HEADERS } = await import('@/lib/core/utils/sse') const workflowInput: any = { input, conversationId } @@ -252,15 +253,31 @@ export async function POST( const stream = await createStreamingResponse({ requestId, - workflow: workflowForExecution, - input: workflowInput, - executingUserId: workspaceOwnerId, streamConfig: { selectedOutputs, isSecureMode: true, workflowTriggerType: 'chat', }, executionId, + executeFn: async ({ onStream, onBlockComplete, abortSignal }) => + executeWorkflow( + workflowForExecution, + requestId, + workflowInput, + workspaceOwnerId, + { + enabled: true, + selectedOutputs, + isSecureMode: true, + workflowTriggerType: 'chat', + onStream, + onBlockComplete, + skipLoggingComplete: true, + abortSignal, + executionMode: 'stream', + }, + executionId + ), }) const streamResponse = new NextResponse(stream, { diff --git a/apps/sim/app/api/form/[identifier]/route.ts b/apps/sim/app/api/form/[identifier]/route.ts index f69ab9e1886..beee4876b03 100644 --- a/apps/sim/app/api/form/[identifier]/route.ts +++ b/apps/sim/app/api/form/[identifier]/route.ts @@ -9,6 +9,7 @@ import { generateRequestId } from '@/lib/core/utils/request' import { generateId } from 
'@/lib/core/utils/uuid' import { preprocessExecution } from '@/lib/execution/preprocessing' import { LoggingSession } from '@/lib/logs/execution/logging-session' +import { executeWorkflow } from '@/lib/workflows/executor/execute-workflow' import { normalizeInputFormatValue } from '@/lib/workflows/input-format' import { createStreamingResponse } from '@/lib/workflows/streaming/streaming' import { isInputDefinitionTrigger } from '@/lib/workflows/triggers/input-definition-triggers' @@ -216,18 +217,33 @@ export async function POST( ...formData, // Spread form fields at top level for convenience } - // Execute workflow using streaming (for consistency with chat) const stream = await createStreamingResponse({ requestId, - workflow: workflowForExecution, - input: workflowInput, - executingUserId: workspaceOwnerId, streamConfig: { selectedOutputs: [], isSecureMode: true, - workflowTriggerType: 'api', // Use 'api' type since form is similar + workflowTriggerType: 'api', }, executionId, + executeFn: async ({ onStream, onBlockComplete, abortSignal }) => + executeWorkflow( + workflowForExecution, + requestId, + workflowInput, + workspaceOwnerId, + { + enabled: true, + selectedOutputs: [], + isSecureMode: true, + workflowTriggerType: 'api', + onStream, + onBlockComplete, + skipLoggingComplete: true, + abortSignal, + executionMode: 'sync', + }, + executionId + ), }) // For forms, we don't stream back - we wait for completion and return success diff --git a/apps/sim/app/api/resume/[workflowId]/[executionId]/[contextId]/route.ts b/apps/sim/app/api/resume/[workflowId]/[executionId]/[contextId]/route.ts index 1a75d8aa598..8c060d9d13e 100644 --- a/apps/sim/app/api/resume/[workflowId]/[executionId]/[contextId]/route.ts +++ b/apps/sim/app/api/resume/[workflowId]/[executionId]/[contextId]/route.ts @@ -1,19 +1,44 @@ import { createLogger } from '@sim/logger' import { type NextRequest, NextResponse } from 'next/server' import { AuthType } from '@/lib/auth/hybrid' +import { getJobQueue, 
shouldUseBullMQ } from '@/lib/core/async-jobs' +import { createBullMQJobData } from '@/lib/core/bullmq' import { generateRequestId } from '@/lib/core/utils/request' +import { SSE_HEADERS } from '@/lib/core/utils/sse' +import { getBaseUrl } from '@/lib/core/utils/urls' import { generateId } from '@/lib/core/utils/uuid' +import { enqueueWorkspaceDispatch } from '@/lib/core/workspace-dispatch' import { setExecutionMeta } from '@/lib/execution/event-buffer' import { preprocessExecution } from '@/lib/execution/preprocessing' import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager' +import { createStreamingResponse } from '@/lib/workflows/streaming/streaming' import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils' import { validateWorkflowAccess } from '@/app/api/workflows/middleware' +import type { ResumeExecutionPayload } from '@/background/resume-execution' +import { ExecutionSnapshot } from '@/executor/execution/snapshot' +import type { SerializedSnapshot } from '@/executor/types' const logger = createLogger('WorkflowResumeAPI') export const runtime = 'nodejs' export const dynamic = 'force-dynamic' +function getStoredSnapshotConfig(pausedExecution: { executionSnapshot: unknown }): { + executionMode?: 'sync' | 'stream' | 'async' + selectedOutputs?: string[] +} { + try { + const serialized = pausedExecution.executionSnapshot as SerializedSnapshot + const snapshot = ExecutionSnapshot.fromJSON(serialized.snapshot) + return { + executionMode: snapshot.metadata.executionMode, + selectedOutputs: snapshot.selectedOutputs, + } + } catch { + return {} + } +} + export async function POST( request: NextRequest, { @@ -24,7 +49,6 @@ export async function POST( ) { const { workflowId, executionId, contextId } = await params - // Allow resume from dashboard without requiring deployment const access = await validateWorkflowAccess(request, workflowId, false) if (access.error) { return NextResponse.json({ error: access.error.message }, { 
status: access.error.status }) @@ -74,12 +98,12 @@ export async function POST( const preprocessResult = await preprocessExecution({ workflowId, userId, - triggerType: 'manual', // Resume is a manual trigger + triggerType: 'manual', executionId: resumeExecutionId, requestId, - checkRateLimit: false, // Manual triggers bypass rate limits - checkDeployment: false, // Resuming existing execution, deployment already checked - skipUsageLimits: true, // Resume is continuation of authorized execution - don't recheck limits + checkRateLimit: false, + checkDeployment: false, + skipUsageLimits: true, useAuthenticatedUserAsActor: isPersonalApiKeyCaller, workspaceId: workflow.workspaceId || undefined, }) @@ -142,8 +166,35 @@ export async function POST( } const isApiCaller = access.auth?.authType === AuthType.API_KEY + const snapshotConfig = isApiCaller ? getStoredSnapshotConfig(enqueueResult.pausedExecution) : {} + const executionMode = isApiCaller ? (snapshotConfig.executionMode ?? 'sync') : undefined - if (isApiCaller) { + if (isApiCaller && executionMode === 'stream') { + const stream = await createStreamingResponse({ + requestId, + streamConfig: { + selectedOutputs: snapshotConfig.selectedOutputs, + timeoutMs: preprocessResult.executionTimeout?.sync, + }, + executionId: enqueueResult.resumeExecutionId, + executeFn: async ({ onStream, onBlockComplete, abortSignal }) => + PauseResumeManager.startResumeExecution({ + ...resumeArgs, + onStream, + onBlockComplete, + abortSignal, + }), + }) + + return new NextResponse(stream, { + headers: { + ...SSE_HEADERS, + 'X-Execution-Id': enqueueResult.resumeExecutionId, + }, + }) + } + + if (isApiCaller && executionMode !== 'async') { const result = await PauseResumeManager.startResumeExecution(resumeArgs) return NextResponse.json({ @@ -162,6 +213,68 @@ export async function POST( }) } + if (isApiCaller && executionMode === 'async') { + const resumePayload: ResumeExecutionPayload = { + resumeEntryId: enqueueResult.resumeEntryId, + 
resumeExecutionId: enqueueResult.resumeExecutionId, + pausedExecutionId: enqueueResult.pausedExecution.id, + contextId: enqueueResult.contextId, + resumeInput: enqueueResult.resumeInput, + userId: enqueueResult.userId, + workflowId, + parentExecutionId: executionId, + } + + let jobId: string + try { + const useBullMQ = shouldUseBullMQ() + if (useBullMQ) { + jobId = await enqueueWorkspaceDispatch({ + id: enqueueResult.resumeExecutionId, + workspaceId: workflow.workspaceId, + lane: 'runtime', + queueName: 'resume-execution', + bullmqJobName: 'resume-execution', + bullmqPayload: createBullMQJobData(resumePayload, { + workflowId, + userId, + }), + metadata: { workflowId, userId }, + }) + } else { + const jobQueue = await getJobQueue() + jobId = await jobQueue.enqueue('resume-execution', resumePayload, { + metadata: { workflowId, workspaceId: workflow.workspaceId, userId }, + }) + } + logger.info('Enqueued async resume execution', { + jobId, + resumeExecutionId: enqueueResult.resumeExecutionId, + }) + } catch (dispatchError) { + logger.error('Failed to dispatch async resume execution', { + error: dispatchError instanceof Error ? dispatchError.message : String(dispatchError), + resumeExecutionId: enqueueResult.resumeExecutionId, + }) + return NextResponse.json( + { error: 'Failed to queue resume execution. Please try again.' 
}, + { status: 503 } + ) + } + + return NextResponse.json( + { + success: true, + async: true, + jobId, + executionId: enqueueResult.resumeExecutionId, + message: 'Resume execution queued', + statusUrl: `${getBaseUrl()}/api/jobs/${jobId}`, + }, + { status: 202 } + ) + } + PauseResumeManager.startResumeExecution(resumeArgs).catch((error) => { logger.error('Failed to start resume execution', { workflowId, @@ -200,7 +313,6 @@ export async function GET( ) { const { workflowId, executionId, contextId } = await params - // Allow access without API key for browser-based UI (same as parent execution endpoint) const access = await validateWorkflowAccess(request, workflowId, false) if (access.error) { return NextResponse.json({ error: access.error.message }, { status: access.error.status }) diff --git a/apps/sim/app/api/speech/token/route.ts b/apps/sim/app/api/speech/token/route.ts index 9e55c50084c..b4a5835b9eb 100644 --- a/apps/sim/app/api/speech/token/route.ts +++ b/apps/sim/app/api/speech/token/route.ts @@ -4,7 +4,7 @@ import { createLogger } from '@sim/logger' import { eq } from 'drizzle-orm' import { type NextRequest, NextResponse } from 'next/server' import { getSession } from '@/lib/auth' -import { hasExceededCostLimit } from '@/lib/billing/core/subscription' +import { checkServerSideUsageLimits } from '@/lib/billing/calculations/usage-monitor' import { recordUsage } from '@/lib/billing/core/usage-log' import { env } from '@/lib/core/config/env' import { getCostMultiplier, isBillingEnabled } from '@/lib/core/config/feature-flags' @@ -110,11 +110,14 @@ export async function POST(request: NextRequest) { } } - if (billingUserId && isBillingEnabled) { - const exceeded = await hasExceededCostLimit(billingUserId) - if (exceeded) { + if (billingUserId) { + const usageCheck = await checkServerSideUsageLimits(billingUserId) + if (usageCheck.isExceeded) { return NextResponse.json( - { error: 'Usage limit exceeded. Please upgrade your plan to continue.' 
}, + { + error: + usageCheck.message || 'Usage limit exceeded. Please upgrade your plan to continue.', + }, { status: 402 } ) } diff --git a/apps/sim/app/api/tools/cloudwatch/describe-alarms/route.ts b/apps/sim/app/api/tools/cloudwatch/describe-alarms/route.ts index 3fc65ab5bfd..b4983ee6619 100644 --- a/apps/sim/app/api/tools/cloudwatch/describe-alarms/route.ts +++ b/apps/sim/app/api/tools/cloudwatch/describe-alarms/route.ts @@ -51,7 +51,9 @@ export async function POST(request: NextRequest) { const command = new DescribeAlarmsCommand({ ...(validatedData.alarmNamePrefix && { AlarmNamePrefix: validatedData.alarmNamePrefix }), ...(validatedData.stateValue && { StateValue: validatedData.stateValue as StateValue }), - ...(validatedData.alarmType && { AlarmTypes: [validatedData.alarmType as AlarmType] }), + AlarmTypes: validatedData.alarmType + ? [validatedData.alarmType as AlarmType] + : (['MetricAlarm', 'CompositeAlarm'] as AlarmType[]), ...(validatedData.limit !== undefined && { MaxRecords: validatedData.limit }), }) diff --git a/apps/sim/app/api/tools/cloudwatch/get-metric-statistics/route.ts b/apps/sim/app/api/tools/cloudwatch/get-metric-statistics/route.ts index 677bafca3ca..55d333a6d49 100644 --- a/apps/sim/app/api/tools/cloudwatch/get-metric-statistics/route.ts +++ b/apps/sim/app/api/tools/cloudwatch/get-metric-statistics/route.ts @@ -53,7 +53,7 @@ export async function POST(request: NextRequest) { })) } } catch { - throw new Error('Invalid dimensions JSON') + return NextResponse.json({ error: 'Invalid dimensions JSON format' }, { status: 400 }) } } diff --git a/apps/sim/app/api/tools/cloudwatch/put-metric-data/route.ts b/apps/sim/app/api/tools/cloudwatch/put-metric-data/route.ts new file mode 100644 index 00000000000..8712b8200c8 --- /dev/null +++ b/apps/sim/app/api/tools/cloudwatch/put-metric-data/route.ts @@ -0,0 +1,136 @@ +import { + CloudWatchClient, + PutMetricDataCommand, + type StandardUnit, +} from '@aws-sdk/client-cloudwatch' +import { createLogger } 
from '@sim/logger'
+import { type NextRequest, NextResponse } from 'next/server'
+import { z } from 'zod'
+import { checkInternalAuth } from '@/lib/auth/hybrid'
+
+const logger = createLogger('CloudWatchPutMetricData')
+
+const VALID_UNITS = [
+  'Seconds',
+  'Microseconds',
+  'Milliseconds',
+  'Bytes',
+  'Kilobytes',
+  'Megabytes',
+  'Gigabytes',
+  'Terabytes',
+  'Bits',
+  'Kilobits',
+  'Megabits',
+  'Gigabits',
+  'Terabits',
+  'Percent',
+  'Count',
+  'Bytes/Second',
+  'Kilobytes/Second',
+  'Megabytes/Second',
+  'Gigabytes/Second',
+  'Terabytes/Second',
+  'Bits/Second',
+  'Kilobits/Second',
+  'Megabits/Second',
+  'Gigabits/Second',
+  'Terabits/Second',
+  'Count/Second',
+  'None',
+] as const
+
+const PutMetricDataSchema = z.object({
+  region: z.string().min(1, 'AWS region is required'),
+  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
+  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
+  namespace: z.string().min(1, 'Namespace is required'),
+  metricName: z.string().min(1, 'Metric name is required'),
+  value: z.number({ coerce: true }).refine((v) => Number.isFinite(v), {
+    message: 'Metric value must be a finite number',
+  }),
+  unit: z.enum(VALID_UNITS).optional(),
+  dimensions: z
+    .string()
+    .optional()
+    .refine(
+      (val) => {
+        if (!val) return true
+        try {
+          const parsed = JSON.parse(val)
+          return typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)
+        } catch {
+          return false
+        }
+      },
+      { message: 'dimensions must be a valid JSON object string' }
+    ),
+})
+
+export async function POST(request: NextRequest) {
+  try {
+    const auth = await checkInternalAuth(request)
+    if (!auth.success || !auth.userId) {
+      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
+    }
+
+    const body = await request.json()
+    const validatedData = PutMetricDataSchema.parse(body)
+
+    const client = new CloudWatchClient({
+      region: validatedData.region,
+      credentials: {
+        accessKeyId: validatedData.accessKeyId,
+        secretAccessKey: validatedData.secretAccessKey,
+      },
+    })
+
+    const timestamp = new Date()
+
+    const dimensions: { Name: string; Value: string }[] = []
+    if (validatedData.dimensions) {
+      const parsed = JSON.parse(validatedData.dimensions)
+      for (const [name, value] of Object.entries(parsed)) {
+        dimensions.push({ Name: name, Value: String(value) })
+      }
+    }
+
+    const command = new PutMetricDataCommand({
+      Namespace: validatedData.namespace,
+      MetricData: [
+        {
+          MetricName: validatedData.metricName,
+          Value: validatedData.value,
+          Timestamp: timestamp,
+          ...(validatedData.unit && { Unit: validatedData.unit as StandardUnit }),
+          ...(dimensions.length > 0 && { Dimensions: dimensions }),
+        },
+      ],
+    })
+
+    await client.send(command)
+
+    return NextResponse.json({
+      success: true,
+      output: {
+        success: true,
+        namespace: validatedData.namespace,
+        metricName: validatedData.metricName,
+        value: validatedData.value,
+        unit: validatedData.unit ?? 'None',
+        timestamp: timestamp.toISOString(),
+      },
+    })
+  } catch (error) {
+    if (error instanceof z.ZodError) {
+      return NextResponse.json(
+        { error: error.errors[0]?.message ?? 'Invalid request' },
+        { status: 400 }
+      )
+    }
+    const errorMessage =
+      error instanceof Error ? error.message : 'Failed to publish CloudWatch metric'
+    logger.error('PutMetricData failed', { error: errorMessage })
+    return NextResponse.json({ error: errorMessage }, { status: 500 })
+  }
+}
diff --git a/apps/sim/app/api/tools/jsm/request/route.ts b/apps/sim/app/api/tools/jsm/request/route.ts
index ae5b150b5b8..93f9fa10cf0 100644
--- a/apps/sim/app/api/tools/jsm/request/route.ts
+++ b/apps/sim/app/api/tools/jsm/request/route.ts
@@ -12,6 +12,20 @@ export const dynamic = 'force-dynamic'
 
 const logger = createLogger('JsmRequestAPI')
 
+function parseJsmErrorMessage(status: number, statusText: string, errorText: string): string {
+  try {
+    const errorData = JSON.parse(errorText)
+    if (errorData.errorMessage) {
+      return `JSM API error: ${errorData.errorMessage}`
+    }
+  } catch {
+    if (errorText) {
+      return `JSM API error: ${errorText}`
+    }
+  }
+  return `JSM API error: ${status} ${statusText}`
+}
+
 export async function POST(request: NextRequest) {
   const auth = await checkInternalAuth(request)
   if (!auth.success || !auth.userId) {
@@ -31,6 +45,7 @@ export async function POST(request: NextRequest) {
     description,
     raiseOnBehalfOf,
     requestFieldValues,
+    formAnswers,
     requestParticipants,
     channel,
     expand,
@@ -55,7 +70,7 @@ export async function POST(request: NextRequest) {
 
   const baseUrl = getJsmApiBaseUrl(cloudId)
 
-  const isCreateOperation = serviceDeskId && requestTypeId && summary
+  const isCreateOperation = serviceDeskId && requestTypeId && (summary || formAnswers)
 
   if (isCreateOperation) {
     const serviceDeskIdValidation = validateAlphanumericId(serviceDeskId, 'serviceDeskId')
@@ -69,15 +84,30 @@ export async function POST(request: NextRequest) {
     }
 
     const url = `${baseUrl}/request`
-    logger.info('Creating request at:', url)
+    logger.info('Creating request at:', { url, serviceDeskId, requestTypeId })
 
     const requestBody: Record = {
       serviceDeskId,
       requestTypeId,
-      requestFieldValues: requestFieldValues || {
-        summary,
-        ...(description && { description }),
-      },
+    }
+
+    if (summary || description || requestFieldValues) {
+      const fieldValues =
+        requestFieldValues && typeof requestFieldValues === 'object'
+          ? {
+              ...(!requestFieldValues.summary && summary ? { summary } : {}),
+              ...(!requestFieldValues.description && description ? { description } : {}),
+              ...requestFieldValues,
+            }
+          : {
+              ...(summary && { summary }),
+              ...(description && { description }),
+            }
+      requestBody.requestFieldValues = fieldValues
+    }
+
+    if (formAnswers && typeof formAnswers === 'object') {
+      requestBody.form = { answers: formAnswers }
     }
 
     if (raiseOnBehalfOf) {
@@ -112,7 +142,10 @@ export async function POST(request: NextRequest) {
       })
 
       return NextResponse.json(
-        { error: `JSM API error: ${response.status} ${response.statusText}`, details: errorText },
+        {
+          error: parseJsmErrorMessage(response.status, response.statusText, errorText),
+          details: errorText,
+        },
         { status: response.status }
       )
     }
@@ -178,7 +211,10 @@ export async function POST(request: NextRequest) {
       })
 
       return NextResponse.json(
-        { error: `JSM API error: ${response.status} ${response.statusText}`, details: errorText },
+        {
+          error: parseJsmErrorMessage(response.status, response.statusText, errorText),
+          details: errorText,
+        },
         { status: response.status }
      )
    }
diff --git a/apps/sim/app/api/workflows/[id]/execute/route.ts b/apps/sim/app/api/workflows/[id]/execute/route.ts
index 86a4a722eb8..c715ca2a8bc 100644
--- a/apps/sim/app/api/workflows/[id]/execute/route.ts
+++ b/apps/sim/app/api/workflows/[id]/execute/route.ts
@@ -39,6 +39,7 @@ import {
   cleanupExecutionBase64Cache,
   hydrateUserFilesWithBase64,
 } from '@/lib/uploads/utils/user-file-base64.server'
+import { executeWorkflow } from '@/lib/workflows/executor/execute-workflow'
 import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
 import { type ExecutionEvent, encodeSSEEvent } from '@/lib/workflows/executor/execution-events'
 import { handlePostExecutionPauseState } from '@/lib/workflows/executor/pause-persistence'
@@ -213,6 +214,7 @@ async function handleAsyncExecution(params: AsyncExecutionParams): Promise
+      executeWorkflow(
+        streamWorkflow,
+        requestId,
+        processedInput,
+        actorUserId,
+        {
+          enabled: true,
+          selectedOutputs: resolvedSelectedOutputs,
+          isSecureMode: false,
+          workflowTriggerType: triggerType === 'chat' ? 'chat' : 'api',
+          onStream,
+          onBlockComplete,
+          skipLoggingComplete: true,
+          includeFileBase64,
+          base64MaxBytes,
+          abortSignal,
+          executionMode: 'stream',
+        },
+        executionId
+      ),
   })
 
   return new NextResponse(stream, {
@@ -1310,6 +1333,7 @@ async function handleExecutePost(
     enforceCredentialAccess: useAuthenticatedUserAsActor,
     workflowStateOverride: effectiveWorkflowStateOverride,
     callChain,
+    executionMode: 'sync',
   }
 
   const sseExecutionVariables = cachedWorkflowData?.variables ?? workflow.variables ?? {}
diff --git a/apps/sim/app/workspace/[workspaceId]/home/components/user-input/user-input.tsx b/apps/sim/app/workspace/[workspaceId]/home/components/user-input/user-input.tsx
index e02c73bbb9f..981dc9d0e58 100644
--- a/apps/sim/app/workspace/[workspaceId]/home/components/user-input/user-input.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/home/components/user-input/user-input.tsx
@@ -39,6 +39,7 @@ import {
   extractContextTokens,
 } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
 import { useWorkflowMap } from '@/hooks/queries/workflows'
+import { useSettingsNavigation } from '@/hooks/use-settings-navigation'
 import { useSpeechToText } from '@/hooks/use-speech-to-text'
 import type { ChatContext } from '@/stores/panel'
 
@@ -120,6 +121,7 @@ export function UserInput({
   onEnterWhileEmpty,
 }: UserInputProps) {
   const { workspaceId } = useParams<{ workspaceId: string }>()
+  const { navigateToSettings } = useSettingsNavigation()
   const { data: workflowsById = {} } = useWorkflowMap(workspaceId)
   const { data: session } = useSession()
   const [value, setValue] = useState(defaultValue)
@@ -239,12 +241,19 @@ export function UserInput({
     valueRef.current = newVal
   }, [])
 
+  const handleUsageLimitExceeded = useCallback(() => {
+    navigateToSettings({ section: 'subscription' })
+  }, [navigateToSettings])
+
   const {
     isListening,
     isSupported: isSttSupported,
     toggleListening: rawToggle,
     resetTranscript,
-  } = useSpeechToText({ onTranscript: handleTranscript })
+  } = useSpeechToText({
+    onTranscript: handleTranscript,
+    onUsageLimitExceeded: handleUsageLimitExceeded,
+  })
 
   const toggleListening = useCallback(() => {
     if (!isListening) {
diff --git a/apps/sim/app/workspace/[workspaceId]/layout.tsx b/apps/sim/app/workspace/[workspaceId]/layout.tsx
index e1f9d815bb3..5fe6c2dbfe1 100644
--- a/apps/sim/app/workspace/[workspaceId]/layout.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/layout.tsx
@@ -1,5 +1,5 @@
-import { cookies } from 'next/headers'
 import { ToastProvider } from '@/components/emcn'
+import { getSession } from '@/lib/auth'
 import { NavTour } from '@/app/workspace/[workspaceId]/components/product-tour'
 import { ImpersonationBanner } from '@/app/workspace/[workspaceId]/impersonation-banner'
 import { GlobalCommandsProvider } from '@/app/workspace/[workspaceId]/providers/global-commands-provider'
@@ -8,22 +8,17 @@ import { SettingsLoader } from '@/app/workspace/[workspaceId]/providers/settings
 import { WorkspacePermissionsProvider } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
 import { WorkspaceScopeSync } from '@/app/workspace/[workspaceId]/providers/workspace-scope-sync'
 import { Sidebar } from '@/app/workspace/[workspaceId]/w/components/sidebar/sidebar'
-import {
-  BRAND_COOKIE_NAME,
-  type BrandCache,
-  BrandingProvider,
-} from '@/ee/whitelabeling/components/branding-provider'
+import { BrandingProvider } from '@/ee/whitelabeling/components/branding-provider'
+import { getOrgWhitelabelSettings } from '@/ee/whitelabeling/org-branding'
 
 export default async function WorkspaceLayout({ children }: { children: React.ReactNode }) {
-  const cookieStore = await cookies()
-  let initialCache: BrandCache | null = null
-  try {
-    const raw = cookieStore.get(BRAND_COOKIE_NAME)?.value
-    if (raw) initialCache = JSON.parse(decodeURIComponent(raw))
-  } catch {}
+  const session = await getSession()
+  // The organization plugin is conditionally spread so TS can't infer activeOrganizationId on the base session type.
+  const orgId = (session?.session as { activeOrganizationId?: string } | null)?.activeOrganizationId
+  const initialOrgSettings = orgId ? await getOrgWhitelabelSettings(orgId) : null
 
   return (
-
+
diff --git a/apps/sim/app/workspace/[workspaceId]/logs/components/log-details/log-details.tsx b/apps/sim/app/workspace/[workspaceId]/logs/components/log-details/log-details.tsx
index 4307d40b5c5..963aba17143 100644
--- a/apps/sim/app/workspace/[workspaceId]/logs/components/log-details/log-details.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/logs/components/log-details/log-details.tsx
@@ -598,6 +598,24 @@ export const LogDetails = memo(function LogDetails({
                       {formatCost(log.cost?.output || 0)}
+                  {(() => {
+                    const models = (log.cost as Record)?.models as
+                      | Record
+                      | undefined
+                    const totalToolCost = models
+                      ? Object.values(models).reduce((sum, m) => sum + (m?.toolCost || 0), 0)
+                      : 0
+                    return totalToolCost > 0 ? (
+
+
+                          Tool Usage:
+
+
+                          {formatCost(totalToolCost)}
+
+
+                    ) : null
+                  })()}
 
@@ -626,7 +644,7 @@ export const LogDetails = memo(function LogDetails({
 
                     Total cost includes a base execution charge of{' '}
-                    {formatCost(BASE_EXECUTION_CHARGE)} plus any model usage costs.
+                    {formatCost(BASE_EXECUTION_CHARGE)} plus any model and tool usage costs.
 
diff --git a/apps/sim/app/workspace/[workspaceId]/settings/[section]/page.tsx b/apps/sim/app/workspace/[workspaceId]/settings/[section]/page.tsx
index 36c1f97867a..ca48abef01b 100644
--- a/apps/sim/app/workspace/[workspaceId]/settings/[section]/page.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/settings/[section]/page.tsx
@@ -16,6 +16,7 @@ const SECTION_TITLES: Record = {
   subscription: 'Subscription',
   team: 'Team',
   sso: 'Single Sign-On',
+  whitelabeling: 'Whitelabeling',
   copilot: 'Copilot Keys',
   mcp: 'MCP Tools',
   'custom-tools': 'Custom Tools',
diff --git a/apps/sim/app/workspace/[workspaceId]/settings/[section]/settings.tsx b/apps/sim/app/workspace/[workspaceId]/settings/[section]/settings.tsx
index 93d1f537be8..4642fc9e843 100644
--- a/apps/sim/app/workspace/[workspaceId]/settings/[section]/settings.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/settings/[section]/settings.tsx
@@ -161,7 +161,7 @@ const WhitelabelingSettings = dynamic(
     import('@/ee/whitelabeling/components/whitelabeling-settings').then(
       (m) => m.WhitelabelingSettings
     ),
-  { loading: () => }
+  { loading: () => , ssr: false }
 )
 
 interface SettingsPageProps {
diff --git a/apps/sim/app/workspace/[workspaceId]/settings/components/general/general.tsx b/apps/sim/app/workspace/[workspaceId]/settings/components/general/general.tsx
index 2c3f437224e..b55b588605e 100644
--- a/apps/sim/app/workspace/[workspaceId]/settings/components/general/general.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/settings/components/general/general.tsx
@@ -387,7 +387,7 @@ export function General() {
diff --git a/apps/sim/app/workspace/[workspaceId]/settings/hooks/use-profile-picture-upload.ts b/apps/sim/app/workspace/[workspaceId]/settings/hooks/use-profile-picture-upload.ts
index 074285d49fa..04078386354 100644
--- a/apps/sim/app/workspace/[workspaceId]/settings/hooks/use-profile-picture-upload.ts
+++ b/apps/sim/app/workspace/[workspaceId]/settings/hooks/use-profile-picture-upload.ts
@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
 
 const logger = createLogger('ProfilePictureUpload')
 const MAX_FILE_SIZE = 5 * 1024 * 1024 // 5MB
-const ACCEPTED_IMAGE_TYPES = ['image/png', 'image/jpeg', 'image/jpg']
+const ACCEPTED_IMAGE_TYPES = ['image/png', 'image/jpeg', 'image/jpg', 'image/svg+xml']
 
 interface UseProfilePictureUploadProps {
   onUpload?: (url: string | null) => void
@@ -27,21 +27,19 @@ export function useProfilePictureUpload({
   const [isUploading, setIsUploading] = useState(false)
 
   useEffect(() => {
-    if (currentImage !== previewUrl) {
-      if (previewRef.current && previewRef.current !== currentImage) {
-        URL.revokeObjectURL(previewRef.current)
-        previewRef.current = null
-      }
-      setPreviewUrl(currentImage || null)
+    if (previewRef.current && previewRef.current !== currentImage) {
+      URL.revokeObjectURL(previewRef.current)
+      previewRef.current = null
     }
-  }, [currentImage, previewUrl])
+    setPreviewUrl(currentImage || null)
+  }, [currentImage])
 
   const validateFile = useCallback((file: File): string | null => {
     if (file.size > MAX_FILE_SIZE) {
       return `File "${file.name}" is too large. Maximum size is 5MB.`
     }
     if (!ACCEPTED_IMAGE_TYPES.includes(file.type)) {
-      return `File "${file.name}" is not a supported image format. Please use PNG or JPEG.`
+      return `File "${file.name}" is not a supported image format. Please use PNG, JPEG, or SVG.`
     }
     return null
   }, [])
diff --git a/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/chat/components/output-select/output-select.tsx b/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/chat/components/output-select/output-select.tsx
index c126816e09d..213fe3d4199 100644
--- a/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/chat/components/output-select/output-select.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/chat/components/output-select/output-select.tsx
@@ -38,6 +38,8 @@ const TagIcon: React.FC<{
 )
 
+const EXCLUDED_OUTPUT_TYPES = new Set(['starter', 'start_trigger', 'human_in_the_loop'] as const)
+
 /**
  * Props for the OutputSelect component
  */
@@ -121,7 +123,7 @@ export function OutputSelect({
     if (blockArray.length === 0) return outputs
 
     blockArray.forEach((block: any) => {
-      if (block.type === 'starter' || !block?.id || !block?.type) return
+      if (EXCLUDED_OUTPUT_TYPES.has(block.type) || !block?.id || !block?.type) return
 
       const blockName =
         block.name && typeof block.name === 'string'
diff --git a/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/utils/auto-layout-utils.ts b/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/utils/auto-layout-utils.ts
index 69a15ec7d67..e9a203ef728 100644
--- a/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/utils/auto-layout-utils.ts
+++ b/apps/sim/app/workspace/[workspaceId]/w/[workflowId]/utils/auto-layout-utils.ts
@@ -4,6 +4,7 @@ import {
   DEFAULT_LAYOUT_PADDING,
   DEFAULT_VERTICAL_SPACING,
 } from '@/lib/workflows/autolayout/constants'
+import { mergeSubblockState } from '@/stores/workflows/utils'
 import { useWorkflowStore } from '@/stores/workflows/workflow/store'
 
 const logger = createLogger('AutoLayoutUtils')
@@ -109,10 +110,12 @@ export async function applyAutoLayoutAndUpdateStore(
     return { success: false, error: errorMessage }
   }
 
-  // Update workflow store immediately with new positions
+  const layoutedBlocks = result.data?.layoutedBlocks || blocks
+  const mergedBlocks = mergeSubblockState(layoutedBlocks, workflowId)
+
   const newWorkflowState = {
     ...workflowStore.getWorkflowState(),
-    blocks: result.data?.layoutedBlocks || blocks,
+    blocks: mergedBlocks,
     lastSaved: Date.now(),
   }
@@ -167,9 +170,10 @@ export async function applyAutoLayoutAndUpdateStore(
     })
 
     // Revert the store changes since database save failed
+    const revertBlocks = mergeSubblockState(blocks, workflowId)
     useWorkflowStore.getState().replaceWorkflowState({
       ...workflowStore.getWorkflowState(),
-      blocks,
+      blocks: revertBlocks,
       lastSaved: workflowStore.lastSaved,
     })
diff --git a/apps/sim/app/workspace/[workspaceId]/w/components/sidebar/components/workspace-header/workspace-header.tsx b/apps/sim/app/workspace/[workspaceId]/w/components/sidebar/components/workspace-header/workspace-header.tsx
index df36aa26164..cd2337ee4e2 100644
--- a/apps/sim/app/workspace/[workspaceId]/w/components/sidebar/components/workspace-header/workspace-header.tsx
+++ b/apps/sim/app/workspace/[workspaceId]/w/components/sidebar/components/workspace-header/workspace-header.tsx
@@ -17,6 +17,7 @@ import {
   ModalFooter,
   ModalHeader,
   Plus,
+  Skeleton,
   UserPlus,
 } from '@/components/emcn'
 import { getDisplayPlanName, isFree } from '@/lib/billing/plan-helpers'
@@ -356,14 +357,16 @@ export function WorkspaceHeader({
           }
         }}
       >
-
-          {workspaceInitial}
-
+        {activeWorkspaceFull ? (
+
+            {workspaceInitial}
+
+        ) : (
+
+        )}
         {!isCollapsed && (
           <>
@@ -400,14 +403,18 @@ export function WorkspaceHeader({
       ) : (
         <>
-
-            {workspaceInitial}
-
+          {activeWorkspaceFull ? (
+
+              {workspaceInitial}
+
+          ) : (
+
+          )}
 
             {activeWorkspace?.name || 'Loading...'}
@@ -580,12 +587,16 @@ export function WorkspaceHeader({
           title={activeWorkspace?.name || 'Loading...'}
           disabled
         >
-
-            {workspaceInitial}
-
+          {activeWorkspaceFull ? (
+
+              {workspaceInitial}
+
+          ) : (
+
+          )}
           {!isCollapsed && (
             <>
diff --git a/apps/sim/background/resume-execution.ts b/apps/sim/background/resume-execution.ts
new file mode 100644
index 00000000000..5831b1c1580
--- /dev/null
+++ b/apps/sim/background/resume-execution.ts
@@ -0,0 +1,77 @@
+import { createLogger } from '@sim/logger'
+import { task } from '@trigger.dev/sdk'
+import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
+
+const logger = createLogger('TriggerResumeExecution')
+
+export type ResumeExecutionPayload = {
+  resumeEntryId: string
+  resumeExecutionId: string
+  pausedExecutionId: string
+  contextId: string
+  resumeInput: unknown
+  userId: string
+  workflowId: string
+  parentExecutionId: string
+}
+
+export async function executeResumeJob(payload: ResumeExecutionPayload) {
+  const { resumeExecutionId, pausedExecutionId, contextId, workflowId, parentExecutionId } = payload
+
+  logger.info('Starting background resume execution', {
+    resumeExecutionId,
+    pausedExecutionId,
+    contextId,
+    workflowId,
+    parentExecutionId,
+  })
+
+  try {
+    const pausedExecution = await PauseResumeManager.getPausedExecutionById(pausedExecutionId)
+    if (!pausedExecution) {
+      throw new Error(`Paused execution not found: ${pausedExecutionId}`)
+    }
+
+    const result = await PauseResumeManager.startResumeExecution({
+      resumeEntryId: payload.resumeEntryId,
+      resumeExecutionId: payload.resumeExecutionId,
+      pausedExecution,
+      contextId: payload.contextId,
+      resumeInput: payload.resumeInput,
+      userId: payload.userId,
+    })
+
+    logger.info('Background resume execution completed', {
+      resumeExecutionId,
+      workflowId,
+      success: result.success,
+      status: result.status,
+    })
+
+    return {
+      success: result.success,
+      workflowId,
+      executionId: resumeExecutionId,
+      parentExecutionId,
+      status: result.status,
+      output: result.output,
+      executedAt: new Date().toISOString(),
+    }
+  } catch (error) {
+    logger.error('Background resume execution failed', {
+      resumeExecutionId,
+      workflowId,
+      error: error instanceof Error ? error.message : String(error),
+    })
+    throw error
+  }
+}
+
+export const resumeExecutionTask = task({
+  id: 'resume-execution',
+  machine: 'medium-1x',
+  retry: {
+    maxAttempts: 1,
+  },
+  run: executeResumeJob,
+})
diff --git a/apps/sim/background/workflow-execution.ts b/apps/sim/background/workflow-execution.ts
index 0990879ba69..794b12d0195 100644
--- a/apps/sim/background/workflow-execution.ts
+++ b/apps/sim/background/workflow-execution.ts
@@ -44,6 +44,7 @@ export type WorkflowExecutionPayload = {
   correlation?: AsyncExecutionCorrelation
   metadata?: Record
   callChain?: string[]
+  executionMode?: 'sync' | 'stream' | 'async'
 }
 
 /**
@@ -112,6 +113,7 @@ export async function executeWorkflowJob(payload: WorkflowExecutionPayload) {
     isClientSession: false,
     callChain: payload.callChain,
     correlation,
+    executionMode: payload.executionMode ?? 'async',
   }
 
   const snapshot = new ExecutionSnapshot(
diff --git a/apps/sim/blocks/blocks/cloudformation.ts b/apps/sim/blocks/blocks/cloudformation.ts
index d99fabebac2..ab5456d4c52 100644
--- a/apps/sim/blocks/blocks/cloudformation.ts
+++ b/apps/sim/blocks/blocks/cloudformation.ts
@@ -117,6 +117,7 @@ export const CloudFormationBlock: BlockConfig<
       type: 'short-input',
       placeholder: '50',
       condition: { field: 'operation', value: 'describe_stack_events' },
+      mode: 'advanced',
     },
   ],
   tools: {
diff --git a/apps/sim/blocks/blocks/cloudwatch.ts b/apps/sim/blocks/blocks/cloudwatch.ts
index c68bcf29430..30e5245d141 100644
--- a/apps/sim/blocks/blocks/cloudwatch.ts
+++ b/apps/sim/blocks/blocks/cloudwatch.ts
@@ -8,6 +8,7 @@ import type {
   CloudWatchGetLogEventsResponse,
   CloudWatchGetMetricStatisticsResponse,
   CloudWatchListMetricsResponse,
+  CloudWatchPutMetricDataResponse,
   CloudWatchQueryLogsResponse,
 } from '@/tools/cloudwatch/types'
 
@@ -19,6 +20,7 @@ export const CloudWatchBlock: BlockConfig<
   | CloudWatchDescribeAlarmsResponse
   | CloudWatchListMetricsResponse
   | CloudWatchGetMetricStatisticsResponse
+  | CloudWatchPutMetricDataResponse
 > = {
   type: 'cloudwatch',
   name: 'CloudWatch',
@@ -27,6 +29,7 @@ export const CloudWatchBlock: BlockConfig<
     'Integrate AWS CloudWatch into workflows. Run Log Insights queries, list log groups, retrieve log events, list and get metrics, and monitor alarms. Requires AWS access key and secret access key.',
   category: 'tools',
   integrationType: IntegrationType.Analytics,
+  docsLink: 'https://docs.sim.ai/tools/cloudwatch',
   tags: ['cloud', 'monitoring'],
   bgColor: 'linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)',
   icon: CloudWatchIcon,
@@ -42,6 +45,7 @@ export const CloudWatchBlock: BlockConfig<
         { label: 'Describe Log Streams', id: 'describe_log_streams' },
         { label: 'List Metrics', id: 'list_metrics' },
         { label: 'Get Metric Statistics', id: 'get_metric_statistics' },
+        { label: 'Publish Metric', id: 'put_metric_data' },
         { label: 'Describe Alarms', id: 'describe_alarms' },
       ],
       value: () => 'query_logs',
@@ -69,7 +73,6 @@ export const CloudWatchBlock: BlockConfig<
       password: true,
       required: true,
     },
-    // Query Logs fields
     {
       id: 'logGroupSelector',
       title: 'Log Group',
@@ -124,6 +127,14 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
         value: ['query_logs', 'get_log_events', 'get_metric_statistics'],
       },
       required: { field: 'operation', value: ['query_logs', 'get_metric_statistics'] },
+      wandConfig: {
+        enabled: true,
+        prompt: `Generate a Unix epoch timestamp (in seconds) based on the user's description of a point in time.
+
+Return ONLY the numeric timestamp - no explanations, no quotes, no extra text.`,
+        placeholder: 'Describe the start time (e.g., "1 hour ago", "beginning of today")...',
+        generationType: 'timestamp',
+      },
     },
     {
       id: 'endTime',
@@ -135,8 +146,15 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
         value: ['query_logs', 'get_log_events', 'get_metric_statistics'],
       },
       required: { field: 'operation', value: ['query_logs', 'get_metric_statistics'] },
+      wandConfig: {
+        enabled: true,
+        prompt: `Generate a Unix epoch timestamp (in seconds) based on the user's description of a point in time.
+
+Return ONLY the numeric timestamp - no explanations, no quotes, no extra text.`,
+        placeholder: 'Describe the end time (e.g., "now", "end of yesterday")...',
+        generationType: 'timestamp',
+      },
     },
-    // Describe Log Groups fields
     {
       id: 'prefix',
       title: 'Log Group Name Prefix',
@@ -144,7 +162,6 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       placeholder: '/aws/lambda/',
       condition: { field: 'operation', value: 'describe_log_groups' },
     },
-    // Get Log Events / Describe Log Streams — shared log group selector
     {
      id: 'logGroupNameSelector',
      title: 'Log Group',
@@ -167,7 +184,6 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       required: { field: 'operation', value: ['get_log_events', 'describe_log_streams'] },
       mode: 'advanced',
     },
-    // Describe Log Streams — stream prefix filter
    {
      id: 'streamPrefix',
      title: 'Stream Name Prefix',
@@ -175,7 +191,6 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       placeholder: '2024/03/31/',
       condition: { field: 'operation', value: 'describe_log_streams' },
     },
-    // Get Log Events — log stream selector (cascading: depends on log group)
    {
      id: 'logStreamNameSelector',
      title: 'Log Stream',
@@ -198,30 +213,92 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       required: { field: 'operation', value: 'get_log_events' },
       mode: 'advanced',
     },
-    // List Metrics fields
     {
       id: 'metricNamespace',
       title: 'Namespace',
       type: 'short-input',
-      placeholder: 'e.g., AWS/EC2, AWS/Lambda, AWS/RDS',
-      condition: { field: 'operation', value: ['list_metrics', 'get_metric_statistics'] },
-      required: { field: 'operation', value: 'get_metric_statistics' },
+      placeholder: 'e.g., AWS/EC2, AWS/Lambda, Custom/MyApp',
+      condition: {
+        field: 'operation',
+        value: ['list_metrics', 'get_metric_statistics', 'put_metric_data'],
+      },
+      required: {
+        field: 'operation',
+        value: ['get_metric_statistics', 'put_metric_data'],
+      },
     },
     {
       id: 'metricName',
       title: 'Metric Name',
       type: 'short-input',
-      placeholder: 'e.g., CPUUtilization, Invocations',
-      condition: { field: 'operation', value: ['list_metrics', 'get_metric_statistics'] },
-      required: { field: 'operation', value: 'get_metric_statistics' },
+      placeholder: 'e.g., CPUUtilization, Invocations, ErrorCount',
+      condition: {
+        field: 'operation',
+        value: ['list_metrics', 'get_metric_statistics', 'put_metric_data'],
+      },
+      required: {
+        field: 'operation',
+        value: ['get_metric_statistics', 'put_metric_data'],
+      },
     },
     {
       id: 'recentlyActive',
       title: 'Recently Active Only',
       type: 'switch',
       condition: { field: 'operation', value: 'list_metrics' },
+      mode: 'advanced',
+    },
+    {
+      id: 'metricValue',
+      title: 'Value',
+      type: 'short-input',
+      placeholder: 'e.g., 1, 42.5',
+      condition: { field: 'operation', value: 'put_metric_data' },
+      required: { field: 'operation', value: 'put_metric_data' },
+    },
+    {
+      id: 'metricUnit',
+      title: 'Unit',
+      type: 'dropdown',
+      options: [
+        { label: 'None', id: 'None' },
+        { label: 'Count', id: 'Count' },
+        { label: 'Percent', id: 'Percent' },
+        { label: 'Seconds', id: 'Seconds' },
+        { label: 'Milliseconds', id: 'Milliseconds' },
+        { label: 'Microseconds', id: 'Microseconds' },
+        { label: 'Bytes', id: 'Bytes' },
+        { label: 'Kilobytes', id: 'Kilobytes' },
+        { label: 'Megabytes', id: 'Megabytes' },
+        { label: 'Gigabytes', id: 'Gigabytes' },
+        { label: 'Terabytes', id: 'Terabytes' },
+        { label: 'Bits', id: 'Bits' },
+        { label: 'Kilobits', id: 'Kilobits' },
+        { label: 'Megabits', id: 'Megabits' },
+        { label: 'Gigabits', id: 'Gigabits' },
+        { label: 'Terabits', id: 'Terabits' },
+        { label: 'Bytes/Second', id: 'Bytes/Second' },
+        { label: 'Kilobytes/Second', id: 'Kilobytes/Second' },
+        { label: 'Megabytes/Second', id: 'Megabytes/Second' },
+        { label: 'Gigabytes/Second', id: 'Gigabytes/Second' },
+        { label: 'Terabytes/Second', id: 'Terabytes/Second' },
+        { label: 'Bits/Second', id: 'Bits/Second' },
+        { label: 'Kilobits/Second', id: 'Kilobits/Second' },
+        { label: 'Megabits/Second', id: 'Megabits/Second' },
+        { label: 'Gigabits/Second', id: 'Gigabits/Second' },
+        { label: 'Terabits/Second', id: 'Terabits/Second' },
+        { label: 'Count/Second', id: 'Count/Second' },
+      ],
+      value: () => 'None',
+      condition: { field: 'operation', value: 'put_metric_data' },
+    },
+    {
+      id: 'publishDimensions',
+      title: 'Dimensions',
+      type: 'table',
+      columns: ['name', 'value'],
+      condition: { field: 'operation', value: 'put_metric_data' },
     },
-    // Get Metric Statistics fields
     {
       id: 'metricPeriod',
       title: 'Period (seconds)',
@@ -251,7 +328,6 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       columns: ['name', 'value'],
       condition: { field: 'operation', value: 'get_metric_statistics' },
     },
-    // Describe Alarms fields
    {
      id: 'alarmNamePrefix',
      title: 'Alarm Name Prefix',
@@ -269,6 +345,7 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
         { label: 'ALARM', id: 'ALARM' },
         { label: 'INSUFFICIENT_DATA', id: 'INSUFFICIENT_DATA' },
       ],
+      value: () => '',
       condition: { field: 'operation', value: 'describe_alarms' },
     },
     {
@@ -280,9 +357,9 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
         { label: 'Metric Alarm', id: 'MetricAlarm' },
         { label: 'Composite Alarm', id: 'CompositeAlarm' },
       ],
+      value: () => '',
       condition: { field: 'operation', value: 'describe_alarms' },
     },
-    // Shared limit field
     {
       id: 'limit',
       title: 'Limit',
@@ -299,6 +376,7 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
           'describe_alarms',
         ],
       },
+      mode: 'advanced',
     },
   ],
   tools: {
@@ -309,6 +387,7 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       'cloudwatch_describe_log_streams',
       'cloudwatch_list_metrics',
       'cloudwatch_get_metric_statistics',
+      'cloudwatch_put_metric_data',
       'cloudwatch_describe_alarms',
     ],
     config: {
@@ -326,6 +405,8 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
           return 'cloudwatch_list_metrics'
         case 'get_metric_statistics':
           return 'cloudwatch_get_metric_statistics'
+        case 'put_metric_data':
+          return 'cloudwatch_put_metric_data'
         case 'describe_alarms':
           return 'cloudwatch_describe_alarms'
         default:
@@ -479,6 +560,48 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
           }
         }
 
+        case 'put_metric_data': {
+          if (!rest.metricNamespace) {
+            throw new Error('Namespace is required')
+          }
+          if (!rest.metricName) {
+            throw new Error('Metric name is required')
+          }
+          if (rest.metricValue === undefined || rest.metricValue === '') {
+            throw new Error('Metric value is required')
+          }
+          const numericValue = Number(rest.metricValue)
+          if (!Number.isFinite(numericValue)) {
+            throw new Error('Metric value must be a finite number')
+          }
+
+          return {
+            awsRegion,
+            awsAccessKeyId,
+            awsSecretAccessKey,
+            namespace: rest.metricNamespace,
+            metricName: rest.metricName,
+            value: numericValue,
+            ...(rest.metricUnit && rest.metricUnit !== 'None' && { unit: rest.metricUnit }),
+            ...(rest.publishDimensions && {
+              dimensions: (() => {
+                const dims = rest.publishDimensions
+                if (typeof dims === 'string') return dims
+                if (Array.isArray(dims)) {
+                  const obj: Record = {}
+                  for (const row of dims) {
+                    const name = row.cells?.name
+                    const value = row.cells?.value
+                    if (name && value !== undefined) obj[name] = String(value)
+                  }
+                  return JSON.stringify(obj)
+                }
+                return JSON.stringify(dims)
+              })(),
+            }),
+          }
+        }
+
         case 'describe_alarms':
           return {
             awsRegion,
@@ -518,6 +641,12 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
     metricPeriod: { type: 'number', description: 'Granularity in seconds' },
     metricStatistics: { type: 'string', description: 'Statistic type (Average, Sum, etc.)' },
     metricDimensions: { type: 'json', description: 'Metric dimensions (Name/Value pairs)' },
+    metricValue: { type: 'number', description: 'Metric value to publish' },
+    metricUnit: { type: 'string', description: 'Metric unit (Count, Seconds, Bytes, etc.)' },
+    publishDimensions: {
+      type: 'json',
+      description: 'Dimensions for published metric (Name/Value pairs)',
+    },
     alarmNamePrefix: { type: 'string', description: 'Alarm name prefix filter' },
     stateValue: {
       type: 'string',
@@ -567,5 +696,29 @@ Return ONLY the query — no explanations, no markdown code blocks.`,
       type: 'array',
       description: 'CloudWatch alarms with state and configuration',
     },
+    success: {
+      type: 'boolean',
+      description: 'Whether the published metric was successful',
+    },
+    namespace: {
+      type: 'string',
+      description: 'Metric namespace',
+    },
+    metricName: {
+      type: 'string',
+      description: 'Metric name',
+    },
+    value: {
+      type: 'number',
+      description: 'Published metric value',
+    },
+    unit: {
+      type: 'string',
+      description: 'Metric unit',
+    },
+    timestamp: {
+      type: 'string',
+      description: 'Timestamp when metric was published',
+    },
   },
 }
diff --git a/apps/sim/blocks/blocks/jira_service_management.ts b/apps/sim/blocks/blocks/jira_service_management.ts
index d78251cb3c0..68e8a357e9b 100644
--- a/apps/sim/blocks/blocks/jira_service_management.ts
+++ b/apps/sim/blocks/blocks/jira_service_management.ts
@@ -198,7 +198,6 @@ export const JiraServiceManagementBlock: BlockConfig = {
       id: 'summary',
       title: 'Summary',
       type: 'short-input',
-      required: true,
       placeholder: 'Enter request summary',
       condition: { field: 'operation', value: 'create_request' },
       wandConfig: {
@@ -238,6 +237,7 @@ Return ONLY the description text - no explanations.`,
       title: 'Raise on Behalf Of',
       type: 'short-input',
       placeholder: 'Account ID to raise request on behalf of',
+      mode: 'advanced',
       condition: { field: 'operation', value: 'create_request' },
     },
     {
@@ -245,6 +245,7 @@ Return ONLY the description text - no explanations.`,
       title: 'Request Participants',
       type: 'short-input',
       placeholder: 'Comma-separated account IDs to add as participants',
+      mode: 'advanced',
       condition: { field: 'operation', value: 'create_request' },
     },
     {
@@ -252,6 +253,7 @@ Return ONLY the description text - no explanations.`,
       title: 'Channel',
       type: 'short-input',
       placeholder: 'Channel (e.g., portal, email)',
+      mode: 'advanced',
       condition: { field: 'operation', value: 'create_request' },
     },
     {
@@ -260,6 +262,16 @@ Return ONLY the description text - no explanations.`,
       type: 'long-input',
       placeholder:
         'JSON object of field values (e.g., {"summary": "Title", "customfield_10010": "value"})',
+      mode: 'advanced',
       condition: { field: 'operation', value: 'create_request' },
+    },
+    {
+      id: 'formAnswers',
+      title: 'Form Answers',
+      type: 'long-input',
+      placeholder:
+        'JSON object for form-based request types (e.g., {"summary": {"text": "Title"}, "customfield_10010": {"choices": ["10320"]}})',
+      mode: 'advanced',
       condition: { field: 'operation', value: 'create_request' },
     },
     {
@@ -571,8 +583,8 @@ Return ONLY the comment text - no explanations.`,
         if (!params.requestTypeId) {
           throw new Error('Request Type ID is required')
         }
-        if (!params.summary) {
-          throw new Error('Summary is required')
+        if (!params.summary && !params.formAnswers) {
+          throw new Error('Summary is required (unless using Form Answers)')
         }
         return {
           ...baseParams,
@@ -584,7 +596,22 @@ Return ONLY the comment text - no explanations.`,
           requestParticipants: params.requestParticipants,
           channel: params.channel,
           requestFieldValues: params.requestFieldValues
-            ? JSON.parse(params.requestFieldValues)
+            ? (() => {
+                try {
+                  return JSON.parse(params.requestFieldValues)
+                } catch {
+                  throw new Error('requestFieldValues must be valid JSON')
+                }
+              })()
+            : undefined,
+          formAnswers: params.formAnswers
+            ? (() => {
+                try {
+                  return JSON.parse(params.formAnswers)
+                } catch {
+                  throw new Error('formAnswers must be valid JSON')
+                }
+              })()
             : undefined,
         }
       case 'get_request':
@@ -826,6 +853,10 @@ Return ONLY the comment text - no explanations.`,
     },
     channel: { type: 'string', description: 'Channel (e.g., portal, email)' },
     requestFieldValues: { type: 'string', description: 'JSON object of request field values' },
+    formAnswers: {
+      type: 'string',
+      description: 'JSON object of form answers for form-based request types',
+    },
     searchQuery: { type: 'string', description: 'Filter request types by name' },
     groupId: { type: 'string', description: 'Filter by request type group ID' },
     expand: { type: 'string', description: 'Comma-separated fields to expand' },
diff --git a/apps/sim/components/emcn/components/tooltip/tooltip.tsx b/apps/sim/components/emcn/components/tooltip/tooltip.tsx
index fb4608c9584..e8a35718d89 100644
--- a/apps/sim/components/emcn/components/tooltip/tooltip.tsx
+++ b/apps/sim/components/emcn/components/tooltip/tooltip.tsx
@@ -42,20 +42,20 @@ const Trigger = TooltipPrimitive.Trigger
 
 const Content = React.forwardRef<
   React.ElementRef,
   React.ComponentPropsWithoutRef
->(({ className, sideOffset = 6, ...props }, ref) => (
+>(({ className, sideOffset = 6, children, ...props }, ref) => (
-      {props.children}
+      {children}
@@ -120,22 +120,35 @@ const VIDEO_EXTENSIONS = ['.mp4', '.webm', '.ogg', '.mov'] as const
 
 const Preview = ({ src, alt = '', width = 240, height, loop = true, className }: PreviewProps) => {
   const pathname = src.toLowerCase().split('?')[0].split('#')[0]
   const isVideo = VIDEO_EXTENSIONS.some((ext) => pathname.endsWith(ext))
+  const [isReady, setIsReady] = React.useState(!isVideo)
 
   return (
-
+
{isVideo ? ( -
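For reference, the `put_metric_data` validation added in the CloudWatch diff coerces the raw field with `Number()` and rejects non-finite results. A minimal standalone sketch of that check, assuming a `toMetricValue` helper name that does not appear in the diff:

```typescript
// Hypothetical extraction of the metric-value validation in the
// put_metric_data branch. Number('') evaluates to 0, which is why the
// empty-string guard must run before the numeric coercion.
function toMetricValue(raw: unknown): number {
  if (raw === undefined || raw === '') {
    throw new Error('Metric value is required')
  }
  const n = Number(raw)
  if (!Number.isFinite(n)) {
    throw new Error('Metric value must be a finite number')
  }
  return n
}
```

This is why the diff checks `rest.metricValue === ''` explicitly instead of relying on `Number.isFinite` alone: an empty field would otherwise be silently published as `0`.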
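The dimensions IIFE in the same branch normalizes three input shapes (a raw JSON string, table rows from the block UI, or a plain object) into one JSON string. A sketch under the assumption of a `toDimensionString` helper and a simplified row shape, neither of which exists in the diff:

```typescript
// Hypothetical row shape modeling the table-input rows referenced as
// row.cells?.name / row.cells?.value above; the real type may differ.
interface TableRow {
  cells?: { name?: string; value?: unknown }
}

// Mirrors the publishDimensions normalization: strings pass through
// untouched, table rows collapse to { name: value } pairs, and anything
// else is serialized directly.
function toDimensionString(
  dims: string | TableRow[] | Record<string, unknown>
): string {
  if (typeof dims === 'string') return dims
  if (Array.isArray(dims)) {
    const obj: Record<string, string> = {}
    for (const row of dims) {
      const name = row.cells?.name
      const value = row.cells?.value
      // Rows with a missing name or value are skipped, as in the diff.
      if (name && value !== undefined) obj[name] = String(value)
    }
    return JSON.stringify(obj)
  }
  return JSON.stringify(dims)
}
```

Note the pass-through branch: a string input is assumed to already be valid JSON and is not re-validated here, matching the behavior of the inline version.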
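The two try/catch IIFEs added to the Jira Service Management `create_request` branch repeat a single parse-or-fail pattern. A possible refactor, assuming a `parseJsonParam` helper name that does not appear in the diff:

```typescript
// Hypothetical helper for the repeated parse-or-throw pattern in the
// create_request branch. Empty or missing params return undefined so the
// field stays optional in the request body, as in the original ternaries.
function parseJsonParam(raw: string | undefined, label: string): unknown {
  if (!raw) return undefined
  try {
    return JSON.parse(raw)
  } catch {
    // Surface the offending parameter name, matching the diff's messages.
    throw new Error(`${label} must be valid JSON`)
  }
}
```

With this helper, the two ternaries would reduce to `parseJsonParam(params.requestFieldValues, 'requestFieldValues')` and `parseJsonParam(params.formAnswers, 'formAnswers')`; the IIFE form in the diff is equivalent, just inlined.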