feat: feature optimizations

This commit is contained in:
2025-12-15 23:33:02 +08:00
parent 7f7551f74f
commit 870ea10351
36 changed files with 3289 additions and 40 deletions

View File

@@ -0,0 +1,23 @@
---
name: "OpenSpec: Apply"
description: Implement an approved OpenSpec change and keep tasks in sync.
category: OpenSpec
tags: [openspec, apply]
---
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
Track these steps as TODOs and complete them one by one.
1. Read `changes/<id>/proposal.md`, `design.md` (if present), and `tasks.md` to confirm scope and acceptance criteria.
2. Work through tasks sequentially, keeping edits minimal and focused on the requested change.
3. Confirm completion before updating statuses—make sure every item in `tasks.md` is finished.
4. Update the checklist after all work is done so each task is marked `- [x]` and reflects reality.
5. Reference `openspec list` or `openspec show <item>` when additional context is required.
**Reference**
- Use `openspec show <id> --json --deltas-only` if you need additional context from the proposal while implementing.
<!-- OPENSPEC:END -->

View File

@@ -0,0 +1,27 @@
---
name: "OpenSpec: Archive"
description: Archive a deployed OpenSpec change and update specs.
category: OpenSpec
tags: [openspec, archive]
---
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
1. Determine the change ID to archive:
- If this prompt already includes a specific change ID (for example inside a `<ChangeId>` block populated by slash-command arguments), use that value after trimming whitespace.
- If the conversation references a change loosely (for example by title or summary), run `openspec list` to surface likely IDs, share the relevant candidates, and confirm which one the user intends.
- Otherwise, review the conversation, run `openspec list`, and ask the user which change to archive; wait for a confirmed change ID before proceeding.
- If you still cannot identify a single change ID, stop and tell the user you cannot archive anything yet.
2. Validate the change ID by running `openspec list` (or `openspec show <id>`) and stop if the change is missing, already archived, or otherwise not ready to archive.
3. Run `openspec archive <id> --yes` so the CLI moves the change and applies spec updates without prompts (use `--skip-specs` only for tooling-only work).
4. Review the command output to confirm the target specs were updated and the change landed in `changes/archive/`.
5. Validate with `openspec validate --strict` and inspect with `openspec show <id>` if anything looks off.
**Reference**
- Use `openspec list` to confirm change IDs before archiving.
- Inspect refreshed specs with `openspec list --specs` and address any validation issues before handing off.
<!-- OPENSPEC:END -->

View File

@@ -0,0 +1,28 @@
---
name: "OpenSpec: Proposal"
description: Scaffold a new OpenSpec change and validate strictly.
category: OpenSpec
tags: [openspec, change]
---
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
- Identify any vague or ambiguous details and ask the necessary follow-up questions before editing files.
- Do not write any code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, and spec deltas). Implementation happens in the apply stage after approval.
**Steps**
1. Review `openspec/project.md`, run `openspec list` and `openspec list --specs`, and inspect related code or docs (e.g., via `rg`/`ls`) to ground the proposal in current behaviour; note any gaps that require clarification.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, and `design.md` (when needed) under `openspec/changes/<id>/`.
3. Map the change into concrete capabilities or requirements, breaking multi-scope efforts into distinct spec deltas with clear relationships and sequencing.
4. Capture architectural reasoning in `design.md` when the solution spans multiple systems, introduces new patterns, or demands trade-off discussion before committing to specs.
5. Draft spec deltas in `changes/<id>/specs/<capability>/spec.md` (one folder per capability) using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement and cross-reference related capabilities when relevant.
6. Draft `tasks.md` as an ordered list of small, verifiable work items that deliver user-visible progress, include validation (tests, tooling), and highlight dependencies or parallelizable work.
7. Validate with `openspec validate <id> --strict` and resolve every issue before sharing the proposal.
**Reference**
- Use `openspec show <id> --json --deltas-only` or `openspec show <spec> --type spec` to inspect details when validation fails.
- Search existing requirements with `rg -n "Requirement:|Scenario:" openspec/specs` before writing new ones.
- Explore the codebase with `rg <keyword>`, `ls`, or direct file reads so proposals align with current implementation realities.
<!-- OPENSPEC:END -->

View File

@@ -17,7 +17,8 @@
"Bash(ls:*)",
"Bash(mysql:*)",
"Bash(npm run lint:*)",
"Bash(npx vue-tsc:*)"
"Bash(npx vue-tsc:*)",
"Bash(pnpm add:*)"
],
"deny": [],
"ask": []

AGENTS.md Normal file
View File

@@ -0,0 +1,18 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions
These instructions are for AI assistants working in this project.
Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding
Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines
Keep this managed block so 'openspec update' can refresh the instructions.
<!-- OPENSPEC:END -->

View File

@@ -1,3 +1,22 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions
These instructions are for AI assistants working in this project.
Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding
Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines
Keep this managed block so 'openspec update' can refresh the instructions.
<!-- OPENSPEC:END -->
# CLAUDE.md
This document provides guidance for Claude Code (claude.ai/code) when working with code in this repository. Always communicate in Chinese.

View File

@@ -43,6 +43,7 @@
"eslint-plugin-oxlint": "~1.11.0",
"eslint-plugin-vue": "~10.4.0",
"globals": "^16.3.0",
"less": "^4.4.2",
"normalize.css": "^8.0.1",
"npm-run-all2": "^8.0.4",
"oxlint": "~1.11.0",

View File

@@ -9,6 +9,39 @@ import { API_BASE } from '@gold/config/api'
// Use the webApi prefix so requests can be proxied
const BASE_URL = `${API_BASE.APP_TIK}/file`
/**
 * Get the duration of a video file, in seconds.
 * @param {File} file - Video file object
 * @returns {Promise<number|null>} Duration in seconds, or null for non-video files
 */
function getVideoDuration(file) {
  return new Promise((resolve) => {
    // Only handle video files
    if (!file.type.startsWith('video/')) {
      resolve(null);
      return;
    }
    const video = document.createElement('video');
    video.preload = 'metadata';
    video.muted = true; // mute to avoid the browser blocking autoplay
    video.onloadedmetadata = function() {
      const duration = Math.round(video.duration);
      URL.revokeObjectURL(video.src);
      resolve(duration);
    };
    video.onerror = function() {
      URL.revokeObjectURL(video.src);
      console.warn('[视频时长] 获取失败,使用默认值60秒');
      resolve(60); // fall back to a default of 60 seconds
    };
    video.src = URL.createObjectURL(file);
  });
}
/**
 * Material library API service
 */
@@ -34,20 +67,33 @@ export const MaterialService = {
 * @param {File} file - File object
 * @param {string} fileCategory - File category: video/generate/audio/mix/voice
 * @param {string} coverBase64 - Video cover as base64 (optional, data URI format)
 * @param {number} duration - Video duration in seconds (optional; auto-detected if omitted)
 * @returns {Promise}
 */
uploadFile(file, fileCategory, coverBase64 = null) {
async uploadFile(file, fileCategory, coverBase64 = null, duration = null) {
  // If no duration was provided and this is a video file, detect it automatically
  if (duration === null && file.type.startsWith('video/')) {
    duration = await getVideoDuration(file);
    console.log('[上传] 获取到视频时长:', duration, '秒');
  }
  const formData = new FormData()
  formData.append('file', file)
  formData.append('fileCategory', fileCategory)
  // Append the duration (video files only)
  if (duration !== null) {
    formData.append('duration', duration.toString());
    console.log('[上传] 附加视频时长:', duration, '秒');
  }
  // If a cover base64 is provided, add it to the form data
  if (coverBase64) {
    // base64 format: data:image/jpeg;base64,/9j/4AAQ...
    // The backend parses this format
    formData.append('coverBase64', coverBase64)
  }
  // Large file uploads need a longer timeout (30 minutes)
  return http.post(`${BASE_URL}/upload`, formData, {
    timeout: 30 * 60 * 1000 // 30 minutes

View File

@@ -44,7 +44,8 @@ const items = computed(() => {
title: '素材库',
children: [
{ path: '/material/list', label: '素材列表', icon: 'grid' },
{ path: '/material/mix-task', label: '混剪任务', icon: 'scissors' },
{ path: '/material/mix', label: '智能混剪', icon: 'scissors' },
{ path: '/material/mix-task', label: '混剪任务', icon: 'video' },
{ path: '/material/group', label: '素材分组', icon: 'folder' },
]
},

View File

@@ -55,6 +55,7 @@ const routes = [
children: [
{ path: '', redirect: '/material/list' },
{ path: 'list', name: '素材列表', component: () => import('../views/material/MaterialList.vue') },
{ path: 'mix', name: '智能混剪', component: () => import('../views/material/Mix.vue') },
{ path: 'mix-task', name: '混剪任务', component: () => import('../views/material/MixTaskList.vue') },
{ path: 'group', name: '素材分组', component: () => import('../views/material/MaterialGroup.vue') },
]

View File

@@ -20,8 +20,7 @@
<a-button
type="primary"
ghost
@click="handleOpenMixModal"
:disabled="groupList.length === 0"
@click="$router.push('/material/mix')"
>
素材混剪
</a-button>

View File

@@ -0,0 +1,785 @@
<template>
<div class="mix-page">
    <!-- Page header -->
<div class="mix-page__header">
<h1 class="mix-page__title">智能混剪</h1>
<a-button @click="$router.push('/material/list')">
<template #icon><ArrowLeftOutlined /></template>
返回素材列表
</a-button>
</div>
<div class="mix-page__content">
      <!-- Left: parameter configuration -->
<div class="mix-page__params">
<a-card title="混剪参数" :bordered="false">
<a-form layout="vertical">
            <!-- Group selection -->
<a-form-item label="选择素材分组" required>
<a-select
v-model:value="formData.groupId"
placeholder="请选择素材分组"
:loading="loadingGroups"
@change="handleGroupChange"
>
<a-select-option v-for="g in groupList" :key="g.id" :value="g.id">
{{ g.name }}
</a-select-option>
</a-select>
</a-form-item>
            <!-- Video title -->
<a-form-item label="视频标题" required>
<a-input
v-model:value="formData.title"
placeholder="请输入生成视频的标题"
:maxlength="50"
show-count
/>
</a-form-item>
            <!-- Number of videos to generate -->
<a-form-item label="生成数量">
<a-radio-group v-model:value="formData.produceCount" button-style="solid">
<a-radio-button :value="1">1</a-radio-button>
<a-radio-button :value="2">2</a-radio-button>
<a-radio-button :value="3">3</a-radio-button>
</a-radio-group>
</a-form-item>
            <!-- Total output duration -->
<a-form-item label="成品总时长">
<div class="mix-page__slider-box">
<a-slider
v-model:value="formData.totalDuration"
:min="15"
:max="30"
:step="1"
:marks="{ 15: '15s', 20: '20s', 25: '25s', 30: '30s' }"
/>
<div class="slider-value">{{ formData.totalDuration }}</div>
</div>
</a-form-item>
            <!-- Per-clip duration -->
<a-form-item label="单切片时长">
<div class="mix-page__slider-box">
<a-slider
v-model:value="formData.clipDuration"
:min="3"
:max="5"
:step="1"
:marks="{ 3: '3s', 4: '4s', 5: '5s' }"
/>
<div class="slider-value">{{ formData.clipDuration }}</div>
</div>
</a-form-item>
            <!-- Crop mode -->
<a-form-item label="裁剪模式">
<a-radio-group v-model:value="formData.cropMode" button-style="solid">
<a-radio-button value="center" class="crop-btn">
居中裁剪
</a-radio-button>
<a-radio-button value="fill" class="crop-btn">
填充模式
</a-radio-button>
</a-radio-group>
</a-form-item>
            <!-- Auto-computed scene count -->
<div class="mix-page__scene-info">
<div class="scene-row">
<span>场景数</span>
<strong>{{ sceneCount }} </strong>
</div>
<div class="scene-row">
<span>实际总时长</span>
<strong>{{ actualTotalDuration }}</strong>
</div>
<div class="scene-row">
<span>已填充</span>
<strong :class="{ 'text-green': filledCount === sceneCount }">
{{ filledCount }} / {{ sceneCount }}
</strong>
</div>
</div>
            <!-- One-click fill button -->
<a-button
block
size="large"
style="margin-bottom: 12px"
:disabled="!groupFiles.length"
@click="autoFillScenes"
>
<template #icon><ThunderboltOutlined /></template>
一键填充
</a-button>
<a-button
type="primary"
block
size="large"
:loading="submitting"
:disabled="!canSubmit"
@click="handleSubmit"
>
<template #icon><RocketOutlined /></template>
开始混剪
</a-button>
</a-form>
</a-card>
</div>
      <!-- Right: scene grid + material list -->
<div class="mix-page__preview">
        <!-- Scene grid area -->
<a-card title="场景编排" :bordered="false" style="margin-bottom: 16px">
<template #extra>
<a-button size="small" @click="clearScenes">清空</a-button>
</template>
<div class="mix-page__scenes">
<div
v-for="(scene, index) in scenes"
:key="index"
class="mix-page__scene"
:class="{ 'mix-page__scene--filled': scene.fileId }"
@click="openSceneSelector(index)"
>
              <!-- Scene index -->
<span class="scene-index">{{ index + 1 }}</span>
              <!-- Filled: show the cover -->
<template v-if="scene.fileId">
<img
v-if="getFileById(scene.fileId)?.coverBase64"
:src="getFileById(scene.fileId).coverBase64"
class="scene-thumb"
/>
<div v-else class="scene-placeholder filled">
<VideoCameraOutlined />
</div>
<div class="scene-name">{{ getFileById(scene.fileId)?.fileName }}</div>
<a-button
class="scene-remove"
type="text"
size="small"
danger
@click.stop="removeScene(index)"
>
<CloseOutlined />
</a-button>
</template>
              <!-- Unfilled: empty cell -->
<template v-else>
<div class="scene-placeholder">
<PlusOutlined />
</div>
<div class="scene-hint">点击选择</div>
</template>
              <!-- Duration tag -->
<span class="scene-duration">{{ formData.clipDuration }}s</span>
</div>
</div>
</a-card>
        <!-- Material library -->
<a-card title="素材库" :bordered="false">
<a-spin :spinning="loadingFiles">
<div v-if="groupFiles.length > 0" class="mix-page__grid">
<div
v-for="file in groupFiles"
:key="file.id"
class="mix-page__item"
:class="{ 'mix-page__item--used': isFileUsed(file.id) }"
@click="handleFileClick(file)"
>
              <!-- Cover image -->
<div class="mix-page__thumb">
<img v-if="file.isVideo && file.coverBase64" :src="file.coverBase64" :alt="file.fileName" />
<div v-else class="mix-page__placeholder">
<VideoCameraOutlined />
</div>
</div>
              <!-- "Used" badge -->
<span v-if="isFileUsed(file.id)" class="mix-page__used-badge">
已使用 ×{{ getFileUsageCount(file.id) }}
</span>
              <!-- File name -->
<div class="mix-page__name" :title="file.fileName">
{{ file.fileName }}
</div>
</div>
</div>
<a-empty v-else description="请先选择素材分组" />
</a-spin>
</a-card>
</div>
</div>
    <!-- Material selection modal -->
<a-modal
v-model:open="selectorVisible"
title="选择素材"
:footer="null"
width="600px"
>
<div class="mix-page__selector-grid">
<div
v-for="file in groupFiles"
:key="file.id"
class="mix-page__selector-item"
@click="selectFileForScene(file)"
>
<div class="selector-thumb">
<img v-if="file.isVideo && file.coverBase64" :src="file.coverBase64" />
<VideoCameraOutlined v-else />
</div>
<div class="selector-name">{{ file.fileName }}</div>
</div>
</div>
</a-modal>
</div>
</template>
<script setup>
import { ref, computed, watch, onMounted } from 'vue'
import { message } from 'ant-design-vue'
import { useRouter } from 'vue-router'
import {
ArrowLeftOutlined,
RocketOutlined,
VideoCameraOutlined,
PlusOutlined,
CloseOutlined,
ThunderboltOutlined
} from '@ant-design/icons-vue'
import { MaterialService, MaterialGroupService } from '@/api/material'
import { MixTaskService } from '@/api/mixTask'
const router = useRouter()
// Form data
const formData = ref({
  groupId: null,
  title: '',
  produceCount: 3,
  totalDuration: 15, // total output duration, 15-30s
  clipDuration: 3, // per-clip duration, 3-5s
  cropMode: 'center' // crop mode, defaults to center crop
})
// State
const loadingGroups = ref(false)
const loadingFiles = ref(false)
const submitting = ref(false)
const selectorVisible = ref(false)
const currentSceneIndex = ref(-1)
// Groups and files
const groupList = ref([])
const groupFiles = ref([])
// Scene list: [{ fileId, fileUrl }, ...]
const scenes = ref([])
// Scene count = total duration / per-clip duration
const sceneCount = computed(() => {
  return Math.floor(formData.value.totalDuration / formData.value.clipDuration)
})
// Actual total duration = scene count × per-clip duration
const actualTotalDuration = computed(() => {
  return sceneCount.value * formData.value.clipDuration
})
// Number of filled scenes
const filledCount = computed(() => {
  return scenes.value.filter(s => s.fileId).length
})
// Watch the scene count and resize the scene array to match
watch(sceneCount, (newCount) => {
  const current = scenes.value.length
  if (newCount > current) {
    // Add empty scenes
    for (let i = current; i < newCount; i++) {
      scenes.value.push({ fileId: null, fileUrl: null })
    }
  } else if (newCount < current) {
    // Drop surplus scenes
    scenes.value = scenes.value.slice(0, newCount)
  }
}, { immediate: true })
// Look up a file by its fileId
const getFileById = (fileId) => {
  return groupFiles.value.find(f => f.id === fileId)
}
// Check whether a file is already used in a scene
const isFileUsed = (fileId) => {
  return scenes.value.some(s => s.fileId === fileId)
}
// Count how many scenes use a file
const getFileUsageCount = (fileId) => {
  return scenes.value.filter(s => s.fileId === fileId).length
}
// Load the group list
const loadGroups = async () => {
  loadingGroups.value = true
  try {
    const res = await MaterialGroupService.getGroupList()
    if (res.code === 0) {
      groupList.value = res.data || []
    }
  } catch (error) {
    message.error('加载分组失败')
  } finally {
    loadingGroups.value = false
  }
}
// Load materials when the selected group changes
const handleGroupChange = async (groupId) => {
  if (!groupId) {
    groupFiles.value = []
    clearScenes()
    return
  }
  loadingFiles.value = true
  try {
    const res = await MaterialService.getFilePage({
      groupId,
      fileCategory: 'video',
      pageNo: 1,
      pageSize: 50
    })
    if (res.code === 0) {
      groupFiles.value = res.data.list || []
      clearScenes()
    }
  } catch (error) {
    message.error('加载素材失败')
  } finally {
    loadingFiles.value = false
  }
}
// Open the scene selector
const openSceneSelector = (index) => {
  currentSceneIndex.value = index
  selectorVisible.value = true
}
// Assign a file to the current scene
const selectFileForScene = (file) => {
  if (currentSceneIndex.value >= 0 && currentSceneIndex.value < scenes.value.length) {
    scenes.value[currentSceneIndex.value] = {
      fileId: file.id,
      fileUrl: file.fileUrl
    }
  }
  selectorVisible.value = false
}
// Clicking a library file fills the first empty scene
const handleFileClick = (file) => {
  const emptyIndex = scenes.value.findIndex(s => !s.fileId)
  if (emptyIndex >= 0) {
    scenes.value[emptyIndex] = {
      fileId: file.id,
      fileUrl: file.fileUrl
    }
  } else {
    message.info('所有场景已填满')
  }
}
// Clear a single scene
const removeScene = (index) => {
  scenes.value[index] = { fileId: null, fileUrl: null }
}
// Reset all scenes
const clearScenes = () => {
  scenes.value = Array(sceneCount.value).fill(null).map(() => ({ fileId: null, fileUrl: null }))
}
// One-click fill: randomly assign materials to empty scenes
const autoFillScenes = () => {
  if (!groupFiles.value.length) {
    message.warning('请先选择素材分组')
    return
  }
  // Shuffle the materials
  const shuffled = [...groupFiles.value].sort(() => Math.random() - 0.5)
  let fileIndex = 0
  // Fill each empty scene
  scenes.value = scenes.value.map(scene => {
    if (!scene.fileId) {
      const file = shuffled[fileIndex % shuffled.length]
      fileIndex++
      return { fileId: file.id, fileUrl: file.fileUrl }
    }
    return scene
  })
  message.success('已随机填充所有场景')
}
// Whether the form can be submitted
const canSubmit = computed(() => {
  return formData.value.groupId &&
    formData.value.title.trim() &&
    filledCount.value === sceneCount.value
})
// Submit the mix task
const handleSubmit = async () => {
  if (!canSubmit.value) return
  submitting.value = true
  try {
    // Build the material list (including each material's actual duration, fileDuration)
    const materials = scenes.value.map(scene => {
      const file = getFileById(scene.fileId)
      return {
        fileId: scene.fileId,
        fileUrl: scene.fileUrl,
        duration: formData.value.clipDuration,
        fileDuration: file?.duration || null // actual material duration
      }
    })
    const res = await MixTaskService.createTask({
      title: formData.value.title,
      materials: materials,
      produceCount: formData.value.produceCount,
      cropMode: formData.value.cropMode
    })
    if (res.code === 0) {
      message.success('混剪任务创建成功!')
      router.push('/material/mix-task')
    }
  } catch (error) {
    message.error('提交失败:' + error.message)
  } finally {
    submitting.value = false
  }
}
onMounted(() => {
  loadGroups()
})
</script>
<style scoped lang="less">
.mix-page {
padding: 24px;
background: var(--color-bg-2);
min-height: 100vh;
&__header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 24px;
}
&__title {
font-size: 24px;
font-weight: 600;
margin: 0;
}
&__content {
display: flex;
gap: 24px;
}
&__params {
width: 320px;
flex-shrink: 0;
.ant-card {
position: sticky;
top: 24px;
}
}
&__preview {
flex: 1;
min-width: 0;
}
&__slider-box {
.slider-value {
text-align: center;
margin-top: 8px;
font-size: 16px;
font-weight: 600;
color: #1890ff;
}
}
&__scene-info {
display: flex;
flex-direction: column;
gap: 12px;
margin-bottom: 16px;
padding: 16px;
background: var(--color-bg-3);
border-radius: 8px;
.scene-row {
display: flex;
justify-content: space-between;
align-items: center;
span {
color: #666;
font-size: 14px;
}
strong {
color: #333;
font-size: 16px;
&.text-green {
color: #52c41a;
}
}
}
}
  // Scene cell styles
&__scenes {
display: flex;
flex-wrap: wrap;
gap: 12px;
}
&__scene {
position: relative;
width: 120px;
height: 100px;
border: 2px dashed #d9d9d9;
border-radius: 8px;
cursor: pointer;
transition: all 0.2s;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
background: #fafafa;
&:hover {
border-color: #1890ff;
background: #f0f7ff;
}
&--filled {
border-style: solid;
border-color: #1890ff;
background: #fff;
}
.scene-index {
position: absolute;
top: 4px;
left: 4px;
width: 20px;
height: 20px;
background: rgba(0, 0, 0, 0.5);
color: #fff;
border-radius: 4px;
font-size: 12px;
display: flex;
align-items: center;
justify-content: center;
z-index: 2;
}
.scene-thumb {
width: 100%;
height: 60px;
object-fit: cover;
border-radius: 4px 4px 0 0;
}
.scene-placeholder {
font-size: 24px;
color: #bfbfbf;
&.filled {
color: #1890ff;
}
}
.scene-hint {
font-size: 12px;
color: #999;
margin-top: 4px;
}
.scene-name {
font-size: 11px;
color: #333;
padding: 4px;
text-align: center;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
width: 100%;
}
.scene-duration {
position: absolute;
bottom: 4px;
right: 4px;
background: rgba(0, 0, 0, 0.6);
color: #fff;
padding: 2px 6px;
border-radius: 4px;
font-size: 10px;
}
.scene-remove {
position: absolute;
top: 2px;
right: 2px;
z-index: 3;
}
}
  // Material library grid styles
&__grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(120px, 1fr));
gap: 12px;
}
&__item {
position: relative;
border-radius: 8px;
overflow: hidden;
cursor: pointer;
transition: all 0.2s;
border: 2px solid transparent;
background: #fff;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.06);
&:hover {
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
border-color: #1890ff;
}
&--used {
opacity: 0.6;
}
}
&__thumb {
aspect-ratio: 16 / 9;
background: #f0f0f0;
overflow: hidden;
img {
width: 100%;
height: 100%;
object-fit: cover;
}
}
&__placeholder {
width: 100%;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
color: #bfbfbf;
font-size: 24px;
}
&__used-badge {
position: absolute;
top: 4px;
right: 4px;
background: #1890ff;
color: #fff;
padding: 2px 6px;
border-radius: 4px;
font-size: 10px;
}
&__name {
padding: 8px;
font-size: 12px;
color: #333;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
  // Selector modal styles
&__selector-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 12px;
max-height: 400px;
overflow-y: auto;
}
&__selector-item {
cursor: pointer;
border-radius: 8px;
overflow: hidden;
border: 2px solid transparent;
transition: all 0.2s;
&:hover {
border-color: #1890ff;
}
.selector-thumb {
aspect-ratio: 16 / 9;
background: #f0f0f0;
display: flex;
align-items: center;
justify-content: center;
color: #bfbfbf;
font-size: 24px;
img {
width: 100%;
height: 100%;
object-fit: cover;
}
}
.selector-name {
padding: 6px;
font-size: 11px;
text-align: center;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
}
}
</style>

View File

@@ -11,9 +11,9 @@
"license": "ISC",
"description": "",
"dependencies": {
"axios": "^1.12.2",
"localforage": "^1.10.0",
"unocss": "^66.5.4",
"axios": "^1.12.2",
"web-storage-cache": "^1.1.1"
}
}

openspec/AGENTS.md Normal file
View File

@@ -0,0 +1,456 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1-2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec validate [item] # Validate changes or specs
openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md # Project conventions
├── specs/ # Current truth - what IS built
│ └── [capability]/ # Single focused capability
│ ├── spec.md # Requirements and scenarios
│ └── design.md # Technical patterns
├── changes/ # Proposals - what SHOULD change
│ ├── [change-name]/
│ │ ├── proposal.md # Why, what, impact
│ │ ├── tasks.md # Implementation checklist
│ │ ├── design.md # Technical decisions (optional; see criteria)
│ │ └── specs/ # Delta changes
│ │ └── [capability]/
│ │ └── spec.md # ADDED/MODIFIED/REMOVED
│ └── archive/ # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers are matched with `trim(header)`, so surrounding whitespace is ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
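For example, a complete MODIFIED entry that carries the full requirement forward (the `Login Rate Limiting` requirement here is hypothetical, for illustration only):

```markdown
## MODIFIED Requirements
### Requirement: Login Rate Limiting
The system SHALL lock an account for 15 minutes after 5 failed login attempts.
#### Scenario: Lockout after repeated failures
- **WHEN** a user fails login 5 times within 10 minutes
- **THEN** further attempts are rejected for 15 minutes
```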
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
├── auth/
│ └── spec.md # ADDED: Two-Factor Authentication
└── notifications/
└── spec.md # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec validate --strict # Is it correct?
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.


@@ -0,0 +1,77 @@
## Context
The mix-cut feature needs to normalize material of various aspect ratios into 9:16 portrait video (720x1280).
Alibaba Cloud ICE supports video cropping and scaling; the correct parameters must be configured in the Timeline.
## Goals / Non-Goals
**Goals:**
- Automatically crop landscape (16:9) material to portrait (9:16)
- Support multiple crop modes (center, smart, fill)
- Preserve video quality and avoid excessive stretching
**Non-Goals:**
- No custom crop-region selection
- No real-time preview
## Decisions
### Crop Mode Design
| Mode | Description | Typical use |
|------|-------------|-------------|
| `center` | Center crop, preserving the original scale | Subject centered in frame |
| `smart` | Smart crop (ICE AI subject detection) | People / product showcases |
| `fill` | Letterbox with black bars, no cropping | Preserve the full frame |
### ICE Parameter Options
**Option A: CropX/CropY/CropW/CropH**
```json
{
  "MediaURL": "xxx",
  "CropX": 656,
  "CropY": 0,
  "CropW": 608,
  "CropH": 1080
}
```
**Option B: Effects + Crop**
```json
{
  "Effects": [{
    "Type": "Crop",
    "X": 656,
    "Y": 0,
    "Width": 608,
    "Height": 1080
  }]
}
```
### Crop Calculation
Cropping 16:9 landscape material (1920x1080) to 9:16:
```
target ratio = 9/16 = 0.5625
source ratio = 16/9 ≈ 1.778
// center crop
cropHeight = sourceHeight = 1080
cropWidth  = cropHeight * (9/16) = 607.5 ≈ 608
cropX = (sourceWidth - cropWidth) / 2 = (1920 - 608) / 2 = 656
cropY = 0
```
## Risks / Trade-offs
- **Frame loss**: center cropping discards content on the left and right edges
- **Scaling distortion**: fill mode shrinks the picture
- **ICE compatibility**: confirm which parameters the target ICE version supports
## Open Questions
1. Does ICE support smart subject-detection cropping?
2. Is a frontend crop preview needed?
3. Which crop mode should be the default?
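The centering math above can be sketched as a small helper. `CropSketch.centerCrop` is a hypothetical name for illustration, not part of the actual `BatchProduceAlignment` code:

```java
// Sketch of the center-crop calculation for a 9:16 portrait target.
// CropSketch/centerCrop are illustrative names, not project code.
public class CropSketch {
    /** Returns {x, y, width, height} of the crop region inside the source frame. */
    public static int[] centerCrop(int sourceWidth, int sourceHeight) {
        double targetRatio = 9.0 / 16.0;      // 9:16 portrait
        double cropHeight = sourceHeight;     // keep the full height
        double cropWidth = cropHeight * targetRatio;
        // center the crop window horizontally
        int x = (int) Math.round((sourceWidth - cropWidth) / 2);
        return new int[] { x, 0, (int) Math.round(cropWidth), (int) Math.round(cropHeight) };
    }

    public static void main(String[] args) {
        int[] r = centerCrop(1920, 1080);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    }
}
```

For a 1920x1080 source this yields the 608x1080 region at X=656 used in the examples above.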


@@ -0,0 +1,21 @@
# Change: Add 9:16 portrait cropping support to ICE
## Why
The mix-cut feature currently outputs a fixed 720x1280 (9:16) size, but input material may be landscape (16:9) or other ratios.
Automatic cropping/scaling is needed so the output meets the portrait requirement without black bars or distortion.
## What Changes
- Add a crop-mode setting (center crop / smart crop / letterbox fill)
- Add a CropMode parameter to the ICE Timeline
- Backend automatically handles material of different aspect ratios
- Frontend exposes an optional crop mode (default: center crop)
## Impact
- Affected specs: `mix-task`
- Affected code:
  - `BatchProduceAlignment.java` - Timeline construction logic
  - `MixTaskSaveReqVO.java` - new cropMode parameter
  - `Mix.vue` - optional crop-mode selector


@@ -0,0 +1,48 @@
## ADDED Requirements
### Requirement: 9:16 Portrait Cropping Support
The mix-cut system SHALL automatically process material of different aspect ratios into 9:16 portrait output.
The system SHALL provide the following crop modes:
- `center`: center crop, preserving the original scale and discarding the overflow
- `smart`: smart crop based on subject detection (depends on ICE capability)
- `fill`: fill mode, scaling the material and padding with black bars to preserve the full frame
The system SHALL default to the `center` crop mode.
#### Scenario: Landscape material is center-cropped
- **WHEN** a user uploads 16:9 landscape material (1920x1080)
- **AND** selects the `center` crop mode
- **THEN** the system computes the crop region automatically (608x1080, centered)
- **AND** outputs a 720x1280 portrait video
#### Scenario: Portrait material needs no cropping
- **WHEN** a user uploads 9:16 portrait material (720x1280)
- **THEN** the system uses the material as-is
- **AND** performs no cropping
#### Scenario: Fill mode preserves the full frame
- **WHEN** a user uploads 16:9 landscape material
- **AND** selects the `fill` mode
- **THEN** the system scales the material to the portrait width
- **AND** pads the top and bottom with black bars
- **AND** outputs a 720x1280 portrait video
### Requirement: Crop Mode Configuration
The mix-task creation API SHALL accept an optional `cropMode` parameter.
Parameter spec:
- Field name: `cropMode`
- Type: String
- Allowed values: `center` | `smart` | `fill`
- Default: `center`
#### Scenario: Crop mode specified
- **WHEN** a user creates a mix task with `cropMode: "fill"`
- **THEN** all materials are processed with fill mode
#### Scenario: Default crop mode used
- **WHEN** a user creates a mix task without specifying `cropMode`
- **THEN** the system uses the default `center` crop mode


@@ -0,0 +1,18 @@
## 1. Research
- [ ] 1.1 Confirm which crop parameters Alibaba Cloud ICE supports (CropX/CropY/CropW/CropH or ScaleMode)
- [ ] 1.2 Test how ICE handles landscape material by default
## 2. Backend
- [ ] 2.1 Add a cropMode field to MixTaskSaveReqVO (center/smart/fill)
- [ ] 2.2 Implement the crop calculation in BatchProduceAlignment
- [ ] 2.3 Add crop parameters to the ICE Timeline
- [ ] 2.4 Unit tests
## 3. Frontend
- [ ] 3.1 Add a crop-mode selector to Mix.vue (default: center crop)
- [ ] 3.2 Include cropMode in the submit payload
## 4. Verification
- [ ] 4.1 Mix test with landscape material
- [ ] 4.2 Mix test with portrait material
- [ ] 4.3 Mix test with mixed aspect ratios

openspec/mix-logic-spec.md

@@ -0,0 +1,135 @@
# Mix-Cut Feature Spec (Simplified)
## Core Requirements
- **Input**: user selects materials and sets a clip duration per material (3-15s)
- **Output**: 1-3 mix-cut videos with different content
- **Total duration**: 15s-60s
- **Differentiation**: same order + same durations + **random clip start points**
## Multi-Video Differentiation Algorithm
### Core Idea
**Random start points + fault tolerance**
- Each video uses a **random clip start point**, so the content is completely different
- Supports **materials of different lengths** (ICE handles edge cases automatically)
- Fault tolerance: if a start point exceeds the material length, ICE clips from 0
**Random seed**: `materialId*1000000 + videoIndex*10000 + urlHash%1000`, for reproducibility
### Algorithm
**Random start generation**:
```java
// 1. Get the actual video duration first
int actualDuration = getVideoDuration(videoUrl);
// 2. Build the random seed
long randomSeed = (material.getFileId() * 1000000L) +
        (videoIndex * 10000L) +
        (material.getFileUrl().hashCode() % 1000);
Random random = new Random(randomSeed);
// 3. Compute the start range from the actual duration
int maxStartOffset = Math.max(0, actualDuration - duration);
int startOffset = random.nextInt(maxStartOffset + 1);
int endOffset = startOffset + duration;
```
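The seeded-offset scheme above can be packaged as a self-contained sketch; `OffsetSketch` and its parameters stand in for the real `MaterialItem` accessors and are illustrative only:

```java
import java.util.Random;

// Sketch: deterministic random clip start, mirroring the seed formula above.
// OffsetSketch and its parameter names are stand-ins, not project code.
public class OffsetSketch {
    public static int startOffset(long fileId, String url, int videoIndex,
                                  int actualDuration, int clipDuration) {
        // seed = materialId*1000000 + videoIndex*10000 + urlHash%1000
        long seed = fileId * 1_000_000L + videoIndex * 10_000L + (url.hashCode() % 1000);
        Random random = new Random(seed);
        int maxStart = Math.max(0, actualDuration - clipDuration); // never overrun the material
        return random.nextInt(maxStart + 1); // 0..maxStart inclusive
    }
}
```

Because the seed is a pure function of the material and the video index, reruns of the same task pick the same offsets, while each `videoIndex` gets its own stream of offsets.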
**Ways to obtain the video duration**:
1. **Database field**: store a duration field at upload time (recommended)
2. **FFprobe**: read video metadata from the command line
3. **ICE metadata API**: call the ICE query endpoint
4. **Default 60s**: conservative fallback, most compatible
**Fault tolerance**:
- Compute the maximum start offset from the actual duration so the clip never exceeds the material length
- If the duration lookup fails, fall back to the 60s default
- ICE handles boundary cases automatically
### ICE Timeline Construction
Each material clip carries these parameters:
- `MediaURL`: material address
- `In`: random clip start (between 0 and actualDuration - duration)
- `Out`: clip end = `In + duration`
- `TimelineIn/TimelineOut`: position on the timeline (sequential concatenation)
ICE automatically handles start points that exceed the material length; no extra checks are needed.
## API Design
### Request Format
```http
POST /api/mix/create
{
"title": "",
"materials": [
{ "fileId": 123, "fileUrl": "https://xxx/v1.mp4", "duration": 5 },
{ "fileId": 456, "fileUrl": "https://xxx/v2.mp4", "duration": 8 },
{ "fileId": 789, "fileUrl": "https://xxx/v3.mp4", "duration": 5 }
],
"produceCount": 3
}
```
### Backend Flow
1. Validate request parameters (total duration 15-60s)
2. Loop to generate produceCount videos:
   - videoIndex = 0, 1, 2...
   - Get each material's actual duration (database / FFprobe / ICE API)
   - Generate a random start point (seed: materialId*1000000 + videoIndex*10000 + urlHash)
   - Compute the start range from the actual duration so it never exceeds the material length
   - Build the Timeline, passing the random In/Out parameters to ICE
   - Submit the ICE job
3. Save the task and return the task ID
## Validation Rules
| Rule | Frontend | Backend |
|------|----------|---------|
| Total duration 15-60s | ✅ | ✅ |
| Single material 3-15s | ✅ | ✅ |
| At least 1 material | ✅ | ✅ |
| Produce count 1-3 | ✅ | ✅ |
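The rules in the table above can be sketched as a single predicate; `MixValidationSketch` is an illustrative name and the thresholds follow this spec's 15-60s / 3-15s figures, not any particular implementation:

```java
import java.util.List;

// Sketch of the validation rules: 15-60s total, 3-15s per material,
// at least one material, 1-3 outputs. Illustrative, not project code.
public class MixValidationSketch {
    public static boolean isValid(List<Integer> durations, int produceCount) {
        if (durations == null || durations.isEmpty()) return false; // at least 1 material
        if (produceCount < 1 || produceCount > 3) return false;     // produce count 1-3
        int total = 0;
        for (int d : durations) {
            if (d < 3 || d > 15) return false;                      // 3-15s per material
            total += d;
        }
        return total >= 15 && total <= 60;                          // 15-60s total
    }
}
```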
## Implementation Checklist
### Done
- [x] Frontend: duration selection and live total calculation
- [x] Backend VO: MaterialItem implemented
- [x] Backend DO: materialsJson field
- [x] Database migration script
- [x] Backend controller: /api/mix/create
- [x] Backend service: multi-video generation logic
- [x] ICE Timeline construction (random start + actual duration + fault tolerance)
- [x] Batch job submission and status tracking
### Verification
- [ ] Compilation check
- [ ] End-to-end functional test
- [ ] Multi-video differentiation check
---
## Code Change List
### Core Changes
1. **BatchProduceAlignment.java**
   - New method: `produceSingleVideoWithOffset(materials, videoIndex, userId)`
   - New method: `getVideoDuration(videoUrl)` - fetch the actual video duration
   - Core logic: obtain the actual duration first, then generate a random start
   - Fault tolerance: derive the start range from the actual duration to avoid overruns
2. **MixTaskServiceImpl.java**
   - Loop to generate produceCount videos
   - Pass a different videoIndex each time so start points differ
3. **Database (optional improvement)**
   - New column: `duration INTEGER COMMENT 'video duration (seconds)'`
   - Preprocess on upload: use FFprobe to get and store the duration
*Version: v3.0 - simplified (ICE auto fault tolerance)*

openspec/project.md

@@ -0,0 +1,31 @@
# Project Context
## Purpose
[Describe your project's purpose and goals]
## Tech Stack
- [List your primary technologies]
- [e.g., TypeScript, React, Node.js]
## Project Conventions
### Code Style
[Describe your code style preferences, formatting rules, and naming conventions]
### Architecture Patterns
[Document your architectural decisions and patterns]
### Testing Strategy
[Explain your testing approach and requirements]
### Git Workflow
[Describe your branching strategy and commit conventions]
## Domain Context
[Add domain-specific knowledge that AI assistants need to understand]
## Important Constraints
[List any technical, business, or regulatory constraints]
## External Dependencies
[Document key external services, APIs, or systems]


@@ -43,8 +43,10 @@ public class AppTikUserFileController {
@Parameter(description = "文件分类video/generate/audio/mix/voice", required = true)
@RequestParam("fileCategory") String fileCategory,
@Parameter(description = "视频封面 base64可选data URI 格式)")
@RequestParam(value = "coverBase64", required = false) String coverBase64) {
return success(userFileService.uploadFile(file, fileCategory, coverBase64));
@RequestParam(value = "coverBase64", required = false) String coverBase64,
@Parameter(description = "视频时长(秒)")
@RequestParam(value = "duration", required = false) Integer duration) {
return success(userFileService.uploadFile(file, fileCategory, coverBase64, duration));
}
@GetMapping("/page")


@@ -79,5 +79,9 @@ public class TikUserFileDO extends TenantBaseDO {
* 文件描述
*/
private String description;
/**
* 视频时长(秒)
*/
private Integer duration;
}


@@ -20,9 +20,10 @@ public interface TikUserFileService {
* @param file 文件
* @param fileCategory 文件分类video/generate/audio/mix/voice
* @param coverBase64 视频封面 base64可选data URI 格式)
* @param duration 视频时长(秒,可选)
* @return 文件编号
*/
Long uploadFile(MultipartFile file, String fileCategory, String coverBase64);
Long uploadFile(MultipartFile file, String fileCategory, String coverBase64, Integer duration);
/**
* 分页查询文件列表


@@ -73,7 +73,7 @@ public class TikUserFileServiceImpl implements TikUserFileService {
private FileConfigService fileConfigService;
@Override
public Long uploadFile(MultipartFile file, String fileCategory, String coverBase64) {
public Long uploadFile(MultipartFile file, String fileCategory, String coverBase64, Integer duration) {
Long userId = SecurityFrameworkUtils.getLoginUserId();
Long tenantId = TenantContextHolder.getTenantId();
@@ -151,7 +151,7 @@ public class TikUserFileServiceImpl implements TikUserFileService {
// ========== 第三阶段保存数据库在事务中如果失败则删除OSS文件 ==========
try {
return saveFileRecord(userId, file, fileCategory, fileUrl, filePath, coverBase64, baseDirectory, infraFileId);
return saveFileRecord(userId, file, fileCategory, fileUrl, filePath, coverBase64, baseDirectory, infraFileId, duration);
} catch (Exception e) {
// 数据库保存失败删除已上传的OSS文件
log.error("[uploadFile][保存数据库失败]", e);
@@ -165,7 +165,7 @@ public class TikUserFileServiceImpl implements TikUserFileService {
*/
@Transactional(rollbackFor = Exception.class)
public Long saveFileRecord(Long userId, MultipartFile file, String fileCategory,
String fileUrl, String filePath, String coverBase64, String baseDirectory, Long infraFileId) {
String fileUrl, String filePath, String coverBase64, String baseDirectory, Long infraFileId, Integer duration) {
// 7. 验证 infraFileId 不为空(必须在保存记录之前检查)
if (infraFileId == null) {
log.error("[saveFileRecord][infra_file.id 为空,无法保存文件记录,用户({})URL({})]", userId, fileUrl);
@@ -231,7 +231,8 @@ public class TikUserFileServiceImpl implements TikUserFileService {
.setFileUrl(fileUrl)
.setFilePath(filePath) // 保存完整的OSS路径由FileService生成
.setCoverUrl(coverUrl) // 设置封面URL如果有
.setCoverBase64(StrUtil.isNotBlank(coverBase64) ? coverBase64 : null); // 保存原始base64数据如果有
.setCoverBase64(StrUtil.isNotBlank(coverBase64) ? coverBase64 : null) // 保存原始base64数据如果有
.setDuration(duration); // 设置视频时长(如果有)
userFileMapper.insert(userFile);


@@ -56,6 +56,9 @@ public class AppTikUserFileRespVO {
@Schema(description = "文件描述")
private String description;
@Schema(description = "视频时长(秒)")
private Integer duration;
@Schema(description = "创建时间", requiredMode = Schema.RequiredMode.REQUIRED)
private LocalDateTime createTime;


@@ -24,5 +24,8 @@ public class AppTikUserFileUploadReqVO {
@Schema(description = "文件描述", example = "测试视频")
private String description;
@Schema(description = "视频时长(秒)", example = "60")
private Integer duration;
}


@@ -12,6 +12,8 @@ import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import cn.iocoder.yudao.module.tik.mix.vo.MixTaskSaveReqVO;
import java.util.*;
// 成功视频
@@ -205,4 +207,193 @@ public class BatchProduceAlignment {
return jobIdWithUrl.split(" : ")[1];
}
/**
* 计算裁剪参数
*
* @param sourceWidth 源素材宽度
* @param sourceHeight 源素材高度
* @param cropMode 裁剪模式center(居中裁剪)、smart(智能裁剪)、fill(填充模式)
* @return 裁剪参数Map包含X、Y、Width、Height
*/
private Map<String, Integer> calculateCropParams(int sourceWidth, int sourceHeight, String cropMode) {
Map<String, Integer> cropParams = new HashMap<>();
double targetRatio = 9.0 / 16.0; // 9:16竖屏比例
if ("fill".equals(cropMode)) {
// 填充模式:不裁剪,保持原尺寸
cropParams.put("X", 0);
cropParams.put("Y", 0);
cropParams.put("Width", sourceWidth);
cropParams.put("Height", sourceHeight);
} else if ("smart".equals(cropMode)) {
// 智能裁剪功能暂未开放,自动降级为居中裁剪
log.info("[裁剪模式] smart模式暂未开放自动降级为center模式");
double cropHeight = sourceHeight;
double cropWidth = cropHeight * targetRatio;
int cropX = (int) Math.round((sourceWidth - cropWidth) / 2);
int cropY = 0;
cropParams.put("X", cropX);
cropParams.put("Y", cropY);
cropParams.put("Width", (int) Math.round(cropWidth));
cropParams.put("Height", (int) Math.round(cropHeight));
} else {
// center模式居中裁剪默认
double cropHeight = sourceHeight;
double cropWidth = cropHeight * targetRatio;
int cropX = (int) Math.round((sourceWidth - cropWidth) / 2);
int cropY = 0;
cropParams.put("X", cropX);
cropParams.put("Y", cropY);
cropParams.put("Width", (int) Math.round(cropWidth));
cropParams.put("Height", (int) Math.round(cropHeight));
}
log.debug("[裁剪计算] 源尺寸={}x{}, 模式={}, 裁剪参数={}", sourceWidth, sourceHeight, cropMode, cropParams);
return cropParams;
}
/**
* 生成单个视频(支持随机截取起始点)
*
* 多视频差异化原理:
* - 每个视频使用随机截取起点,确保内容完全不同
* - 支持不同长度的素材ICE自动容错处理
* - 容错机制如果起点超出素材长度从0开始截取
*
* @param materials 素材列表包含fileUrl和duration
* @param videoIndex 视频序号0开始用于生成随机种子
* @param userId 用户ID
* @param cropMode 裁剪模式center(居中裁剪)、smart(智能裁剪)、fill(填充模式)
* @return jobId : outputUrl 格式字符串
*/
public String produceSingleVideoWithOffset(List<MixTaskSaveReqVO.MaterialItem> materials,
int videoIndex, Long userId, String cropMode) throws Exception {
if (iceClient == null) {
initClient();
}
JSONArray videoClipArray = new JSONArray();
JSONArray audioClipArray = new JSONArray();
float timelinePos = 0;
for (int i = 0; i < materials.size(); i++) {
MixTaskSaveReqVO.MaterialItem material = materials.get(i);
String videoUrl = material.getFileUrl();
int duration = material.getDuration();
// 验证视频URL必须是阿里云OSS地址
if (!videoUrl.contains(".aliyuncs.com")) {
log.error("[ICE][视频URL不是阿里云OSS地址][视频{}: {}]", i + 1, videoUrl);
throw new IllegalArgumentException("视频URL必须是阿里云OSS地址当前URL: " + videoUrl);
}
// 计算随机截取起点
// 优先使用前端传入的素材实际时长无则从0开始截取兜底
Integer fileDuration = material.getFileDuration();
int startOffset = 0;
int endOffset = duration;
if (fileDuration != null && fileDuration > duration) {
// 有实际时长且足够:随机起点范围 0 到 (实际时长 - 截取时长)
long randomSeed = ((material.getFileId() != null ? material.getFileId() : i) * 1000000L) +
(videoIndex * 10000L) + (material.getFileUrl().hashCode() % 1000);
Random random = new Random(randomSeed);
int maxStartOffset = fileDuration - duration;
startOffset = random.nextInt(maxStartOffset + 1);
endOffset = startOffset + duration;
log.debug("[ICE][随机截取] fileId={}, fileDuration={}s, In={}, Out={}",
material.getFileId(), fileDuration, startOffset, endOffset);
} else {
// 无时长或时长不足从0开始截取兜底
log.debug("[ICE][兜底截取] fileId={}, fileDuration={}, In=0, Out={}",
material.getFileId(), fileDuration, duration);
}
log.debug("[ICE][添加视频片段][视频{}: {}, In={}, Out={}, TimelineIn={}, TimelineOut={}]",
videoIndex + 1, videoUrl, startOffset, endOffset, timelinePos, timelinePos + duration);
// 构建视频片段(带 In/Out 参数)
JSONObject videoClip = new JSONObject();
videoClip.put("MediaURL", videoUrl);
videoClip.put("In", startOffset);
videoClip.put("Out", endOffset);
videoClip.put("TimelineIn", timelinePos);
videoClip.put("TimelineOut", timelinePos + duration);
// 添加裁剪效果9:16竖屏输出
// 假设源素材为1920x108016:9可根据实际情况调整
int sourceWidth = 1920;
int sourceHeight = 1080;
if (cropMode != null && !"fill".equals(cropMode)) {
// 非填充模式需要裁剪
Map<String, Integer> cropParams = calculateCropParams(sourceWidth, sourceHeight, cropMode);
JSONArray effects = new JSONArray();
JSONObject cropEffect = new JSONObject();
cropEffect.put("Type", "Crop");
cropEffect.put("X", cropParams.get("X"));
cropEffect.put("Y", cropParams.get("Y"));
cropEffect.put("Width", cropParams.get("Width"));
cropEffect.put("Height", cropParams.get("Height"));
effects.add(cropEffect);
videoClip.put("Effects", effects);
log.debug("[裁剪效果] 视频{}应用裁剪,模式={}, 参数={}", i + 1, cropMode, cropParams);
}
videoClipArray.add(videoClip);
// 为每个视频片段添加静音的音频轨道
JSONObject audioClip = new JSONObject();
audioClip.put("MediaURL", videoUrl);
audioClip.put("In", startOffset);
audioClip.put("Out", endOffset);
audioClip.put("TimelineIn", timelinePos);
audioClip.put("TimelineOut", timelinePos + duration);
audioClip.put("Effects", new JSONArray() {{
add(new JSONObject() {{
put("Type", "Volume");
put("Gain", 0); // 静音
}});
}});
audioClipArray.add(audioClip);
timelinePos += duration;
}
// 构建时间线
String timeline = "{\"VideoTracks\":[{\"VideoTrackClips\":" + videoClipArray.toJSONString() +
"}],\"AudioTracks\":[{\"AudioTrackClips\":" + audioClipArray.toJSONString() + "}]}";
// 生成输出文件路径
String targetFileName = UUID.randomUUID().toString().replace("-", "");
String mixDirectory = ossInitService.getOssDirectoryByCategory(userId, "mix");
String dateDir = java.time.LocalDate.now().format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd"));
String outputMediaPath = mixDirectory + "/" + dateDir + "/" + targetFileName + ".mp4";
String bucketEndpoint = "https://" + properties.getBucket() + ".oss-" + properties.getRegionId() + ".aliyuncs.com";
String outputMediaUrl = bucketEndpoint + "/" + outputMediaPath;
int width = 720;
int height = 1280;
int bitrate = 2000;
String outputMediaConfig = "{\"MediaURL\":\"" + outputMediaUrl + "\",\"Width\":" + width +
",\"Height\":" + height + ",\"Bitrate\":" + bitrate + "}";
SubmitMediaProducingJobRequest request = new SubmitMediaProducingJobRequest();
request.setTimeline(timeline);
request.setOutputMediaConfig(outputMediaConfig);
log.info("[ICE][提交任务][videoIndex={}, 素材数量={}, 总时长={}s]",
videoIndex, materials.size(), (int)timelinePos);
SubmitMediaProducingJobResponse response = iceClient.submitMediaProducingJob(request);
String jobId = response.getBody().getJobId();
log.info("[ICE][任务提交成功][videoIndex={}, jobId={}, outputUrl={}]", videoIndex, jobId, outputMediaUrl);
return jobId + " : " + outputMediaUrl;
}
}


@@ -24,9 +24,9 @@ public class MixTaskConstants {
/**
* 定时任务配置
* 改为每2分钟检查一次,降低API调用频率
* 改为每30秒检查一次,提供更实时的进度更新
*/
public static final String CRON_CHECK_STATUS = "0 */2 * * * ?";
public static final String CRON_CHECK_STATUS = "*/30 * * * * ?";
/**
* 任务状态检查优化配置


@@ -46,6 +46,12 @@ public class MixTaskDO extends TenantBaseDO {
@TableField("video_urls")
private String videoUrls;
/**
* 素材配置JSON包含fileId、fileUrl、duration
*/
@TableField("materials_json")
private String materialsJson;
/**
* 背景音乐URL列表(逗号分隔)
*/
@@ -162,4 +168,18 @@ public class MixTaskDO extends TenantBaseDO {
public void setOutputUrlList(List<String> outputUrls) {
this.outputUrls = outputUrls == null || outputUrls.isEmpty() ? null : String.join(",", outputUrls);
}
/**
* 获取素材配置JSON
*/
public String getMaterialsJson() {
return materialsJson;
}
/**
* 设置素材配置JSON
*/
public void setMaterialsJson(String materialsJson) {
this.materialsJson = materialsJson;
}
}


@@ -2,6 +2,7 @@ package cn.iocoder.yudao.module.tik.mix.service;
import cn.iocoder.yudao.framework.common.pojo.PageResult;
import cn.iocoder.yudao.framework.common.util.object.BeanUtils;
import cn.iocoder.yudao.framework.common.util.json.JsonUtils;
import cn.hutool.core.util.StrUtil;
import cn.iocoder.yudao.module.infra.service.file.FileService;
import cn.iocoder.yudao.module.tik.mix.client.IceClient;
@@ -46,8 +47,11 @@ public class MixTaskServiceImpl implements MixTaskService {
@Override
@Transactional(rollbackFor = Exception.class)
public Long createMixTask(MixTaskSaveReqVO createReqVO, Long userId) {
log.info("[MixTask][创建任务] userId={}, title={}, videoCount={}, produceCount={}",
userId, createReqVO.getTitle(), createReqVO.getVideoUrls().size(), createReqVO.getProduceCount());
// 1. 校验时长
validateDuration(createReqVO);
log.info("[MixTask][创建任务] userId={}, title={}, materialCount={}, produceCount={}",
userId, createReqVO.getTitle(), createReqVO.getMaterials().size(), createReqVO.getProduceCount());
// 1. 创建初始任务对象
MixTaskDO task = MixTaskUtils.createInitialTask(createReqVO, userId);
@@ -168,10 +172,29 @@ public class MixTaskServiceImpl implements MixTaskService {
// 3. 重新提交到ICE
CompletableFuture.runAsync(() -> {
try {
// 手动构建请求对象纯画面模式无需text和bgMusicUrls
// 从 materialsJson 重建请求对象
List<MixTaskSaveReqVO.MaterialItem> materials = null;
if (StrUtil.isNotEmpty(existTask.getMaterialsJson())) {
materials = JsonUtils.parseArray(existTask.getMaterialsJson(), MixTaskSaveReqVO.MaterialItem.class);
} else if (existTask.getVideoUrlList() != null && !existTask.getVideoUrlList().isEmpty()) {
// 兼容旧版本:从 videoUrls 重建默认3秒时长
materials = existTask.getVideoUrlList().stream()
.map(url -> {
MixTaskSaveReqVO.MaterialItem item = new MixTaskSaveReqVO.MaterialItem();
item.setFileUrl(url);
item.setDuration(3); // 默认3秒
return item;
})
.collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
}
if (materials == null || materials.isEmpty()) {
throw new IllegalArgumentException("无法重建素材列表");
}
MixTaskSaveReqVO saveReqVO = new MixTaskSaveReqVO();
saveReqVO.setTitle(existTask.getTitle());
saveReqVO.setVideoUrls(existTask.getVideoUrlList());
saveReqVO.setMaterials(materials);
saveReqVO.setProduceCount(existTask.getProduceCount());
submitToICE(id, saveReqVO, existTask.getUserId());
} catch (Exception e) {
@@ -353,24 +376,32 @@ public class MixTaskServiceImpl implements MixTaskService {
/**
* 提交任务到阿里云 ICE
*
* 多视频差异化逻辑:
* - 每个视频使用相同的素材顺序和时长
* - 但截取起始点不同videoIndex * duration
* - 生成内容不同的多个视频
*/
private void submitToICE(Long taskId, MixTaskSaveReqVO createReqVO, Long userId) {
try {
// 1. 转换为ICE需要的参数格式
String[] videoArray = createReqVO.getVideoUrls().toArray(new String[0]);
List<String> jobIdWithUrls = new ArrayList<>();
int produceCount = createReqVO.getProduceCount();
// 2. 调用ICE批量生成接口纯画面模式无需text和bgMusic
List<String> jobIdWithUrls = batchProduceAlignment.batchProduceAlignment(
createReqVO.getTitle(),
videoArray,
createReqVO.getProduceCount(),
userId
);
// 循环生成多个视频,每个视频使用不同的截取起始点
for (int videoIndex = 0; videoIndex < produceCount; videoIndex++) {
String jobIdWithUrl = batchProduceAlignment.produceSingleVideoWithOffset(
createReqVO.getMaterials(),
videoIndex,
userId,
createReqVO.getCropMode()
);
jobIdWithUrls.add(jobIdWithUrl);
}
// 3. 解析jobId和输出URL
// 解析jobId和输出URL
MixTaskUtils.JobIdUrlPair jobIdUrlPair = MixTaskUtils.parseJobIdsAndUrls(jobIdWithUrls);
// 4. 更新任务信息(包含状态和进度)
// 更新任务信息
updateTaskWithResults(taskId, jobIdUrlPair.getJobIds(), jobIdUrlPair.getOutputUrls(),
MixTaskConstants.STATUS_RUNNING, MixTaskConstants.PROGRESS_UPLOADED);
@@ -498,4 +529,36 @@ public class MixTaskServiceImpl implements MixTaskService {
}
});
}
/**
* 校验混剪任务时长
*/
private void validateDuration(MixTaskSaveReqVO req) {
// 1. 素材列表不能为空
if (req.getMaterials() == null || req.getMaterials().isEmpty()) {
throw new IllegalArgumentException("素材列表不能为空");
}
// 2. 计算总时长
int totalDuration = req.getMaterials().stream()
.mapToInt(MixTaskSaveReqVO.MaterialItem::getDuration)
.sum();
// 3. 总时长校验15s-30s
if (totalDuration < 15) {
throw new IllegalArgumentException("总时长不能小于15秒当前" + totalDuration + "");
}
if (totalDuration > 30) {
throw new IllegalArgumentException("总时长不能超过30秒当前" + totalDuration + "");
}
// 4. 单个素材时长校验3s-5s
for (MixTaskSaveReqVO.MaterialItem item : req.getMaterials()) {
if (item.getDuration() < 3 || item.getDuration() > 5) {
throw new IllegalArgumentException("单个素材时长需在3-5秒之间当前" + item.getDuration() + "");
}
}
log.info("[MixTask][时长校验通过] totalDuration={}s, materialCount={}", totalDuration, req.getMaterials().size());
}
}


@@ -1,5 +1,6 @@
package cn.iocoder.yudao.module.tik.mix.util;
import cn.iocoder.yudao.framework.common.util.json.JsonUtils;
import cn.iocoder.yudao.module.tik.mix.constants.MixTaskConstants;
import cn.iocoder.yudao.module.tik.mix.dal.dataobject.MixTaskDO;
import cn.iocoder.yudao.module.tik.mix.vo.MixTaskSaveReqVO;
@@ -27,7 +28,19 @@ public class MixTaskUtils {
task.setUserId(userId);
task.setTitle(reqVO.getTitle());
task.setText(null); // 纯画面模式,不需要文案
task.setVideoUrlList(reqVO.getVideoUrls());
// 存储素材配置JSON
String materialsJson = JsonUtils.toJsonString(reqVO.getMaterials());
task.setMaterialsJson(materialsJson);
// 兼容旧版本:同时存储 videoUrls取第一个视频的URL用于兼容查询
if (reqVO.getMaterials() != null && !reqVO.getMaterials().isEmpty()) {
List<String> videoUrls = reqVO.getMaterials().stream()
.map(MixTaskSaveReqVO.MaterialItem::getFileUrl)
.collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
task.setVideoUrlList(videoUrls);
}
task.setBgMusicUrlList(null); // 纯画面模式,不需要背景音乐
task.setProduceCount(reqVO.getProduceCount());
task.setStatus(MixTaskConstants.STATUS_PENDING);
@@ -127,4 +140,54 @@ public class MixTaskUtils {
return outputUrls;
}
}
/**
* 构建 ICE Timeline
*
* @param materials 素材列表
* @return ICE Timeline JSON 字符串
*/
public static String buildTimeline(List<MixTaskSaveReqVO.MaterialItem> materials) {
StringBuilder tracks = new StringBuilder();
float currentTime = 0;
for (int i = 0; i < materials.size(); i++) {
MixTaskSaveReqVO.MaterialItem material = materials.get(i);
if (i > 0) {
tracks.append(",");
}
tracks.append(String.format("""
{
"MediaURL": "%s",
"In": 0,
"Out": %d,
"TimelineIn": %.2f,
"TimelineOut": %.2f
}
""",
material.getFileUrl(),
material.getDuration(),
currentTime,
currentTime + material.getDuration()
));
currentTime += material.getDuration();
}
return buildFullTimeline(tracks.toString());
}
/**
* 构建完整的 ICE Timeline
*/
private static String buildFullTimeline(String tracks) {
return String.format("""
{
"VideoTracks": [{
"TrackItems": [%s]
}]
}
""", tracks);
}
}


@@ -3,6 +3,8 @@ package cn.iocoder.yudao.module.tik.mix.vo;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.Data;
import jakarta.validation.constraints.Max;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotEmpty;
import jakarta.validation.constraints.NotNull;
@@ -16,11 +18,36 @@ public class MixTaskSaveReqVO {
@NotBlank(message = "视频标题不能为空")
private String title;
@Schema(description = "视频素材URL列表", required = true)
@NotEmpty(message = "视频素材不能为空")
private List<String> videoUrls;
@Schema(description = "素材配置列表", required = true)
@NotEmpty(message = "素材列表不能为空")
private List<MaterialItem> materials;
@Schema(description = "生成数量", required = true, example = "1")
@NotNull(message = "生成数量不能为空")
private Integer produceCount = 1; // defaults to generating 1 video
@Schema(description = "裁剪模式", example = "center")
private String cropMode = "center"; // defaults to center crop
@Schema(description = "素材项")
@Data
public static class MaterialItem {
@Schema(description = "素材文件ID", required = true, example = "12345")
@NotNull(message = "素材文件ID不能为空")
private Long fileId;
@Schema(description = "素材URL", required = true, example = "https://xxx.com/video1.mp4")
@NotBlank(message = "素材URL不能为空")
private String fileUrl;
@Schema(description = "截取时长(秒)", required = true, example = "3")
@Min(value = 3, message = "单个素材时长不能小于3秒")
@Max(value = 5, message = "单个素材时长不能超过5秒")
@NotNull(message = "素材时长不能为空")
private Integer duration;
@Schema(description = "素材实际时长(秒)", example = "60")
private Integer fileDuration;
}
}

@@ -61,9 +61,9 @@ public class CosyVoiceProperties {
private Duration connectTimeout = Duration.ofSeconds(10);
/**
* Read timeout
* Read timeout, increased to 3 minutes to improve the speech-synthesis success rate
*/
private Duration readTimeout = Duration.ofSeconds(60);
private Duration readTimeout = Duration.ofSeconds(180);
/**
* Whether enabled

@@ -56,9 +56,9 @@ public class LatentsyncProperties {
private Duration connectTimeout = Duration.ofSeconds(10);
/**
* Read timeout
* Read timeout, increased to 3 minutes to improve the lip-sync call success rate
*/
private Duration readTimeout = Duration.ofSeconds(60);
private Duration readTimeout = Duration.ofSeconds(180);
/**
* Whether to enable calls

@@ -0,0 +1,110 @@
package cn.iocoder.yudao.module.tik.media;
import cn.iocoder.yudao.module.tik.file.service.TikOssInitService;
import cn.iocoder.yudao.module.tik.mix.config.IceProperties;
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Unit tests for BatchProduceAlignment.
*/
@ExtendWith(MockitoExtension.class)
class BatchProduceAlignmentTest {
@Mock
private IceProperties iceProperties;
@Mock
private TikOssInitService ossInitService;
private BatchProduceAlignment batchProduceAlignment;
@BeforeEach
void setUp() {
batchProduceAlignment = new BatchProduceAlignment(iceProperties, null, ossInitService);
}
@Test
void testCalculateCropParams_centerMode() {
// 16:9 landscape source (1920x1080) -> 9:16 portrait crop
Map<String, Integer> cropParams = callCalculateCropParams(1920, 1080, "center");
// verify the centered crop parameters
assertThat(cropParams.get("X")).isEqualTo(656); // (1920 - 608) / 2
assertThat(cropParams.get("Y")).isZero();
assertThat(cropParams.get("Width")).isEqualTo(608); // 1080 * (9/16)
assertThat(cropParams.get("Height")).isEqualTo(1080);
}
@Test
void testCalculateCropParams_smartMode() {
// smart mode is currently implemented identically to center
Map<String, Integer> cropParams = callCalculateCropParams(1920, 1080, "smart");
assertThat(cropParams.get("X")).isEqualTo(656);
assertThat(cropParams.get("Y")).isZero();
assertThat(cropParams.get("Width")).isEqualTo(608);
assertThat(cropParams.get("Height")).isEqualTo(1080);
}
@Test
void testCalculateCropParams_fillMode() {
// fill mode: no cropping, keep the original dimensions
Map<String, Integer> cropParams = callCalculateCropParams(1920, 1080, "fill");
assertThat(cropParams.get("X")).isZero();
assertThat(cropParams.get("Y")).isZero();
assertThat(cropParams.get("Width")).isEqualTo(1920);
assertThat(cropParams.get("Height")).isEqualTo(1080);
}
@Test
void testCalculateCropParams_differentAspectRatios() {
// landscape sources at different resolutions
Map<String, Integer> cropParams1 = callCalculateCropParams(1280, 720, "center");
assertThat(cropParams1.get("Width")).isEqualTo(405); // 720 * (9/16)
assertThat(cropParams1.get("Height")).isEqualTo(720);
// square source
Map<String, Integer> cropParams2 = callCalculateCropParams(1080, 1080, "center");
assertThat(cropParams2.get("Width")).isEqualTo(608); // 1080 * (9/16)
assertThat(cropParams2.get("Height")).isEqualTo(1080);
}
@Test
void testCalculateCropParams_defaultToCenter() {
// the default should be center mode
Map<String, Integer> cropParams = callCalculateCropParams(1920, 1080, null);
assertThat(cropParams.get("X")).isEqualTo(656);
assertThat(cropParams.get("Y")).isZero();
assertThat(cropParams.get("Width")).isEqualTo(608);
assertThat(cropParams.get("Height")).isEqualTo(1080);
}
/**
* Invokes the private method calculateCropParams via reflection.
*/
private Map<String, Integer> callCalculateCropParams(int sourceWidth, int sourceHeight, String cropMode) {
try {
java.lang.reflect.Method method = BatchProduceAlignment.class.getDeclaredMethod(
"calculateCropParams", int.class, int.class, String.class);
method.setAccessible(true);
@SuppressWarnings("unchecked")
Map<String, Integer> result = (Map<String, Integer>) method.invoke(
batchProduceAlignment, sourceWidth, sourceHeight, cropMode);
return result;
} catch (Exception e) {
throw new RuntimeException("Failed to invoke calculateCropParams via reflection", e);
}
}
}
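The implementation of `calculateCropParams` is not part of this diff, but the expected values in the tests pin down its behavior. The following is a hypothetical reconstruction consistent with those expectations (the `CropParamsSketch` class name is illustrative): `center`, `smart`, and `null` all produce a centered 9:16 crop with the width rounded from `height * 9/16`, while `fill` returns the source dimensions unchanged.

```java
import java.util.HashMap;
import java.util.Map;

public class CropParamsSketch {
    // Hypothetical reconstruction matching the test expectations above;
    // edge cases (e.g. a source narrower than the 9:16 crop) are ignored
    static Map<String, Integer> calculateCropParams(int sourceWidth, int sourceHeight, String cropMode) {
        Map<String, Integer> params = new HashMap<>();
        if ("fill".equals(cropMode)) {
            // fill mode: no cropping, keep the original dimensions
            params.put("X", 0);
            params.put("Y", 0);
            params.put("Width", sourceWidth);
            params.put("Height", sourceHeight);
            return params;
        }
        // center, smart, and null all fall back to a centered 9:16 crop:
        // 1080 * 9/16.0 = 607.5, rounded to 608, matching the tests
        int cropWidth = (int) Math.round(sourceHeight * 9 / 16.0);
        params.put("X", (sourceWidth - cropWidth) / 2);
        params.put("Y", 0);
        params.put("Width", cropWidth);
        params.put("Height", sourceHeight);
        return params;
    }

    public static void main(String[] args) {
        Map<String, Integer> center = calculateCropParams(1920, 1080, "center");
        System.out.println(center.get("X") + "," + center.get("Width")); // prints 656,608
    }
}
```

For a 1920x1080 source this yields X=656, Width=608, Height=1080, exactly the values asserted in `testCalculateCropParams_centerMode`.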