Compare commits

18 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 6920957152 | |
| | 604f8becc9 | |
| | 0af5bab75d | |
| | 0b8b823b2e | |
| | d354a6fefa | |
| | 1c29fd5adc | |
| | f97b885411 | |
| | 606f9b480b | |
| | 546beb3112 | |
| | 3c9138f115 | |
| | cbbaaa95a3 | |
| | 7e953db6bd | |
| | 65da30f83d | |
| | 1965bbfee7 | |
| | 8ac1c99c63 | |
| | 082eb8579b | |
| | 0696651f04 | |
| | f2aa075e65 | |
@@ -10,3 +10,6 @@ trim_trailing_whitespace = true

[*.py]
indent_size = 4

[*.ipynb]
indent_size = 4
.gitignore (3 changes, vendored)

@@ -7,3 +7,6 @@ out
__pycache__
subenv
caption-engine/build
caption-engine/models
output.wav
.venv
.vscode/settings.json (5 changes, vendored)

@@ -7,5 +7,8 @@
    },
    "[json]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    }
    },
    "python.analysis.extraPaths": [
        "./caption-engine"
    ]
}
README.md (124 changes)

@@ -1,17 +1,30 @@
<div align="center" >
  <img src="./resources/icon.png" width="100px" height="100px"/>
  <img src="./build/icon.png" width="100px" height="100px"/>
  <h1 align="center">auto-caption</h1>
  <p>Auto Caption is a cross-platform real-time caption display application.</p>
  <p>
    <a href="https://github.com/HiMeditator/auto-caption/releases">
      <img src="https://img.shields.io/badge/release-0.4.0-blue">
    </a>
    <a href="https://github.com/HiMeditator/auto-caption/issues">
      <img src="https://img.shields.io/github/issues/HiMeditator/auto-caption?color=orange">
    </a>
    <img src="https://img.shields.io/github/languages/top/HiMeditator/auto-caption?color=royalblue">
    <img src="https://img.shields.io/github/repo-size/HiMeditator/auto-caption?color=green">
    <img src="https://img.shields.io/github/stars/HiMeditator/auto-caption?style=social">
  </p>
  <p>
    | <b>简体中文</b>
    | <a href="./README_en.md">English</a>
    | <a href="./README_ja.md">日本語</a> |
  </p>
  <p><i>Version v0.2.0 has been released. Version v1.0.0, which is expected to add a local caption engine, is under development...</i></p>
  <p><i>Version v0.4.0, which includes the Vosk local caption engine, has been released. <b>The local caption engine does not include translation yet</b>; the local translation module is still under development...</i></p>
</div>



## 📥 Download

[GitHub Releases](https://github.com/HiMeditator/auto-caption/releases)

@@ -24,31 +37,75 @@

[Project API Documentation](./docs/api-docs/electron-ipc.md)

### Basic Usage
## 📖 Basic Usage

Currently only an installable version for Windows is provided. To use the default Gummy caption engine, you first need to obtain an API KEY from the Alibaba Cloud Model Studio platform and configure it in your environment variables; only then can the model be used.
Installable versions are currently provided for Windows and macOS.

**The international version of Alibaba Cloud does not provide the Gummy model, so non-Chinese users currently cannot use the default caption engine. I am developing a new local caption engine to ensure every user has a default engine available.**
> The international version of Alibaba Cloud does not provide the Gummy model, so non-Chinese users currently cannot use the Gummy caption engine.

Related tutorials:
To use the default Gummy caption engine (which uses a cloud model for speech recognition and translation), first obtain an API KEY from the Alibaba Cloud Model Studio platform, then add the API KEY in the app settings or configure it as an environment variable (only Windows supports reading the API KEY from environment variables). Related tutorials:

- [Obtain an API KEY](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configure the API Key through environment variables](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables).
- [Configure the API Key through environment variables](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

> The recognition quality of Vosk models is relatively poor; use them with caution.

To use the Vosk local caption engine, first download the model you need from the [Vosk Models](https://alphacephei.com/vosk/models) page, extract it locally, and add the model folder path in the app settings. The Vosk caption engine does not support translated captions yet.



**If the caption engines above do not meet your needs and you know Python, consider developing your own caption engine. See the [Caption Engine Manual](./docs/engine-manual/zh.md) for details.**

If you want to understand how the caption engine works, or you want to develop your own caption engine, see the [Caption Engine Manual](./docs/engine-manual/zh.md).
## ✨ Features

- Multiple UI languages
- Cross-platform, multiple UI languages
- Rich caption style settings
- Flexible caption engine selection
- Multi-language recognition and translation
- Caption history display and export
- Caption generation from audio output and microphone input
- Caption generation from audio output or microphone input

Notes:
- Windows supports caption generation from both audio output and microphone input
- Linux currently only supports caption generation from microphone input
- macOS is not supported yet
- Windows and macOS support caption generation from both audio output and microphone input, but **capturing system audio output on macOS requires extra setup; see the [Auto Caption User Manual](./docs/user-manual/zh.md) for details**
- Linux currently cannot capture system audio output and only supports caption generation from microphone input

## ⚙️ Built-in Caption Engines

The software currently ships with 2 caption engines, and 1 new engine is planned. Their details follow.

### Gummy Caption Engine (Cloud)

Built on Tongyi Lab's [Gummy speech translation model](https://help.aliyun.com/zh/model-studio/gummy-speech-recognition-translation/); the cloud model is invoked through the [Alibaba Cloud Model Studio](https://bailian.console.aliyun.com) API.

**Model parameters:**

- Supported audio sample rates: 16 kHz and above
- Audio sample depth: 16-bit
- Supported audio channels: mono
- Recognizable languages: Chinese, English, Japanese, Korean, German, French, Russian, Italian, Spanish
- Supported translations:
  - Chinese → English, Japanese, Korean
  - English → Chinese, Japanese, Korean
  - Japanese, Korean, German, French, Russian, Italian, Spanish → Chinese or English

**Network traffic:**

The caption engine samples at the native sample rate (assume 48 kHz) with a 16-bit sample depth and uploads mono audio, so the upload rate is approximately:

$$
48000\ \text{samples/second} \times 2\ \text{bytes/sample} \times 1\ \text{channel} = 96000\ \text{bytes/second} \approx 93.75\ \text{KB/s}
$$

Moreover, the engine uploads data only while it is actually receiving an audio stream, so the real upload rate may be lower. The traffic for returned model results is small and is not considered.
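
As a quick check of the figure above, a minimal sketch (using only the values assumed in this section) reproduces the estimate:

```python
# Rough upload-bandwidth estimate for the Gummy engine's audio upload.
SAMPLE_RATE = 48_000   # native sample rate assumed above (Hz)
BYTES_PER_SAMPLE = 2   # 16-bit samples
CHANNELS = 1           # mono upload

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
print(bytes_per_second)         # 96000
print(bytes_per_second / 1024)  # 93.75 (KB/s, with 1 KB = 1024 bytes)
```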
### Vosk Caption Engine (Local)

Built on [vosk-api](https://github.com/alphacep/vosk-api). It currently only produces the original transcript of the audio and does not generate translations.

### FunASR Caption Engine (Local)

If feasible, it will be built on [FunASR](https://github.com/modelscope/FunASR). Research and a feasibility check have not been done yet.

## 🚀 Running the Project

@@ -65,7 +122,10 @@ npm install

First enter the `caption-engine` folder and run the following commands to create a virtual environment:

```bash
# in ./caption-engine folder
python -m venv subenv
# or
python3 -m venv subenv
```

Then activate the virtual environment:

@@ -73,11 +133,13 @@ python -m venv subenv

```bash
# Windows
subenv/Scripts/activate
# Linux
# Linux or macOS
source subenv/bin/activate
```

Then install the dependencies (note: on Linux, comment out `PyAudioWPatch` in `requirements.txt`; the module is Windows-only):
Then install the dependencies (note: on Linux or macOS, comment out `PyAudioWPatch` in `requirements.txt`; the module is Windows-only).

> This step may fail, usually because a build step failed; install the build tool packages indicated by the error message.

```bash
pip install -r requirements.txt

@@ -86,7 +148,17 @@ pip install -r requirements.txt

Then build the project with `pyinstaller`:

```bash
pyinstaller --onefile main-gummy.py
pyinstaller ./main-gummy.spec
pyinstaller ./main-vosk.spec
```

Note that the path to the `vosk` library in `main-vosk.spec` may be incorrect and needs to be adjusted to your actual environment.

```
# Windows
vosk_path = str(Path('./subenv/Lib/site-packages/vosk').resolve())
# Linux or macOS
vosk_path = str(Path('./subenv/lib/python3.x/site-packages/vosk').resolve())
```
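
Since the `site-packages` layout differs by platform and Python version, a small sketch (my suggestion, not part of the project's spec file) can resolve the installed `vosk` package path instead of hard-coding it:

```python
# Hypothetical helper: locate the installed vosk package in the active
# environment instead of hard-coding the site-packages layout.
import importlib.util
from pathlib import Path

spec = importlib.util.find_spec('vosk')
if spec is None or spec.origin is None:
    raise RuntimeError('vosk is not installed in the active environment')
vosk_path = str(Path(spec.origin).parent.resolve())
print(vosk_path)
```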
At this point the build is complete; the corresponding executables can be found in the `caption-engine/dist` folder, and you can proceed with the next steps.

@@ -98,13 +170,29 @@ npm run dev

```
### Build the Project

Note that the software is not adapted to macOS yet; build on Windows or Linux, preferably Windows, which implements the full feature set.
Note that the software has currently only been built and tested on Windows and macOS; correct behavior on Linux is not guaranteed.

```bash
# For windows
npm run build:win
# For macOS, not avaliable yet
# For macOS
npm run build:mac
# For Linux
npm run build:linux
```

Note that the configuration in `electron-builder.yml` at the project root must be adjusted for the target platform:

```yml
extraResources:
  # For Windows
  - from: ./caption-engine/dist/main-gummy.exe
    to: ./caption-engine/main-gummy.exe
  - from: ./caption-engine/dist/main-vosk.exe
    to: ./caption-engine/main-vosk.exe
  # For macOS and Linux
  # - from: ./caption-engine/dist/main-gummy
  #   to: ./caption-engine/main-gummy
  # - from: ./caption-engine/dist/main-vosk
  #   to: ./caption-engine/main-vosk
```
README_en.md (155 changes)

@@ -1,55 +1,112 @@
<div align="center" >
  <img src="./resources/icon.png" width="100px" height="100px"/>
  <img src="./build/icon.png" width="100px" height="100px"/>
  <h1 align="center">auto-caption</h1>
  <p>Auto Caption is a cross-platform real-time caption display software.</p>
  <p>
    | <a href="./README.md">Chinese</a>
    | <b>English</b>
    | <a href="./README_ja.md">Japanese</a> |
    <a href="https://github.com/HiMeditator/auto-caption/releases">
      <img src="https://img.shields.io/badge/release-0.4.0-blue">
    </a>
    <a href="https://github.com/HiMeditator/auto-caption/issues">
      <img src="https://img.shields.io/github/issues/HiMeditator/auto-caption?color=orange">
    </a>
    <img src="https://img.shields.io/github/languages/top/HiMeditator/auto-caption?color=royalblue">
    <img src="https://img.shields.io/github/repo-size/HiMeditator/auto-caption?color=green">
    <img src="https://img.shields.io/github/stars/HiMeditator/auto-caption?style=social">
  </p>
  <p><i>Version v0.2.0 has been released. Version v1.0.0, which is expected to add a local caption engine, is under development...</i></p>
  <p>
    | <a href="./README.md">简体中文</a>
    | <b>English</b>
    | <a href="./README_ja.md">日本語</a> |
  </p>
  <p><i>Version v0.4.0, with the Vosk local caption engine, has been released. <b>Currently the local caption engine does not include translation</b>; the local translation module is still under development...</i></p>
</div>



## 📥 Download

[GitHub Releases](https://github.com/HiMeditator/auto-caption/releases)

## 📚 Related Documentation
## 📚 Documentation

[Auto Caption User Manual](./docs/user-manual/en.md)

[Caption Engine Explanation Document](./docs/engine-manual/en.md)
[Caption Engine Documentation](./docs/engine-manual/en.md)

[Project API Documentation (Chinese)](./docs/api-docs/electron-ipc.md)

### Basic Usage
## 📖 Basic Usage

Currently, only an installable version for the Windows platform is provided. If you want to use the default Gummy caption engine, you first need to obtain an API KEY from the Alibaba Cloud Model Studio and configure it in the environment variables. This is necessary to use the model properly.
Currently, installable versions are available for the Windows and macOS platforms.

**The international version of Alibaba Cloud does not provide the Gummy model, so non-Chinese users currently cannot use the default caption engine. I am trying to develop a new local caption engine to ensure that all users have access to a default caption engine.**
> The international version of Alibaba Cloud services does not provide the Gummy model, so non-Chinese users currently cannot use the Gummy caption engine.

Relevant tutorials:
- [Obtain API KEY (Chinese)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configure API Key in Environment Variables (Chinese)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables).
To use the default Gummy caption engine (which uses cloud-based models for speech recognition and translation), you first need to obtain an API KEY from the Alibaba Cloud Bailian platform. Then add the API KEY to the software settings or configure it in environment variables (only the Windows platform supports reading the API KEY from environment variables) to use this model properly. Related tutorials:

- [Obtaining an API KEY (Chinese)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configuring the API Key through Environment Variables (Chinese)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

> The recognition performance of Vosk models is suboptimal; please use them with caution.

To use the Vosk local caption engine, first download the model you need from the [Vosk Models](https://alphacephei.com/vosk/models) page, extract it locally, and add the model folder path to the software settings. Currently, the Vosk caption engine does not support translated captions.



**If the above caption engines don't meet your needs and you know Python, you may consider developing your own caption engine. For detailed instructions, please refer to the [Caption Engine Documentation](./docs/engine-manual/en.md).**

If you want to understand how the caption engine works, or you want to develop your own caption engine, please refer to the [Caption Engine Explanation Document](./docs/engine-manual/en.md).
## ✨ Features

- Multi-language interface support
- Cross-platform, multi-language UI support
- Rich caption style settings
- Flexible caption engine selection
- Multi-language recognition and translation
- Caption record display and export
- Generate captions for audio output and microphone input
- Caption recording display and export
- Generate captions for audio output or microphone input

Notes:
- The Windows platform supports generating captions for both audio output and microphone input.
- The Linux platform currently only supports generating captions for microphone input.
- The macOS platform is not yet supported.
- The Windows and macOS platforms support generating captions for both audio output and microphone input, but **macOS requires additional setup to capture system audio output; see the [Auto Caption User Manual](./docs/user-manual/en.md) for details.**
- The Linux platform currently cannot capture system audio output and only supports generating captions for microphone input.

## 🚀 Project Execution
## ⚙️ Built-in Subtitle Engines

Currently, the software comes with 2 subtitle engines, with 1 new engine planned. Details are as follows.

### Gummy Subtitle Engine (Cloud)

Developed on Tongyi Lab's [Gummy Speech Translation Model](https://help.aliyun.com/zh/model-studio/gummy-speech-recognition-translation/), using the [Alibaba Cloud Bailian](https://bailian.console.aliyun.com) API to call this cloud model.

**Model Parameters:**

- Supported audio sample rate: 16 kHz and above
- Audio sample depth: 16-bit
- Supported audio channels: mono
- Recognizable languages: Chinese, English, Japanese, Korean, German, French, Russian, Italian, Spanish
- Supported translations:
  - Chinese → English, Japanese, Korean
  - English → Chinese, Japanese, Korean
  - Japanese, Korean, German, French, Russian, Italian, Spanish → Chinese or English

**Network Traffic Consumption:**

The subtitle engine samples at the native sample rate (assumed to be 48 kHz) with a 16-bit sample depth and a mono channel, so the upload rate is approximately:

$$
48000\ \text{samples/second} \times 2\ \text{bytes/sample} \times 1\ \text{channel} = 96000\ \text{bytes/second} \approx 93.75\ \text{KB/s}
$$

The engine only uploads data while it is receiving an audio stream, so the actual upload rate may be lower. The return traffic for model results is small and not considered here.

### Vosk Subtitle Engine (Local)

Developed on [vosk-api](https://github.com/alphacep/vosk-api). It currently only generates the original transcript of the audio and does not support translation.

### FunASR Subtitle Engine (Local)

If feasible, it will be developed on [FunASR](https://github.com/modelscope/FunASR). Feasibility has not yet been researched or verified.

## 🚀 Project Setup



@@ -59,12 +116,15 @@ Notes:
npm install
```

### Build Caption Engine
### Build Subtitle Engine

First, navigate to the `caption-engine` folder and execute the following command to create a virtual environment:
First enter the `caption-engine` folder and execute the following commands to create a virtual environment:

```bash
# in ./caption-engine folder
python -m venv subenv
# or
python3 -m venv subenv
```

Then activate the virtual environment:

@@ -72,38 +132,67 @@ Then activate the virtual environment:

```bash
# Windows
subenv/Scripts/activate
# Linux
# Linux or macOS
source subenv/bin/activate
```

Next, install the dependencies (note that if you are in a Linux environment, you should comment out `PyAudioWPatch` in `requirements.txt`, as this module is only applicable to the Windows environment):
Then install the dependencies (note: for Linux or macOS environments, you need to comment out `PyAudioWPatch` in `requirements.txt`, as this module is only for Windows environments).

> This step may report errors, usually due to build failures. You need to install the corresponding build tools based on the error messages.

```bash
pip install -r requirements.txt
```

Then build the project using `pyinstaller`:
Then use `pyinstaller` to build the project:

```bash
pyinstaller --onefile main-gummy.py
pyinstaller ./main-gummy.spec
pyinstaller ./main-vosk.spec
```

At this point, the project is built. You can find the executable file in the `caption-engine/dist` folder and proceed with further operations.
Note that the path to the `vosk` library in `main-vosk.spec` might be incorrect and needs to be configured according to the actual situation.

### Run the Project
```
# Windows
vosk_path = str(Path('./subenv/Lib/site-packages/vosk').resolve())
# Linux or macOS
vosk_path = str(Path('./subenv/lib/python3.x/site-packages/vosk').resolve())
```

After the build completes, you can find the executables in the `caption-engine/dist` folder. Then proceed with the subsequent operations.

### Run Project

```bash
npm run dev
```
### Build the Project

Note that the software is currently not adapted for the macOS platform. Please use Windows or Linux systems for building, with Windows being more recommended due to its full functionality.
### Build Project

Note: Currently the software has only been built and tested on the Windows and macOS platforms. Correct operation on the Linux platform is not guaranteed.

```bash
# For Windows
# For windows
npm run build:win
# For macOS, not avaliable yet
# For macOS
npm run build:mac
# For Linux
npm run build:linux
```

Note: You need to modify the configuration in the `electron-builder.yml` file in the project root directory according to the target platform:

```yml
extraResources:
  # For Windows
  - from: ./caption-engine/dist/main-gummy.exe
    to: ./caption-engine/main-gummy.exe
  - from: ./caption-engine/dist/main-vosk.exe
    to: ./caption-engine/main-vosk.exe
  # For macOS and Linux
  # - from: ./caption-engine/dist/main-gummy
  #   to: ./caption-engine/main-gummy
  # - from: ./caption-engine/dist/main-vosk
  #   to: ./caption-engine/main-vosk
```
README_ja.md (157 changes)

@@ -1,17 +1,30 @@
<div align="center" >
  <img src="./resources/icon.png" width="100px" height="100px"/>
  <img src="./build/icon.png" width="100px" height="100px"/>
  <h1 align="center">auto-caption</h1>
  <p>Auto Caption is a cross-platform real-time caption display application.</p>
  <p>
    | <a href="./README.md">Simplified Chinese</a>
    | <a href="./README_en.md">English</a>
    <a href="https://github.com/HiMeditator/auto-caption/releases">
      <img src="https://img.shields.io/badge/release-0.4.0-blue">
    </a>
    <a href="https://github.com/HiMeditator/auto-caption/issues">
      <img src="https://img.shields.io/github/issues/HiMeditator/auto-caption?color=orange">
    </a>
    <img src="https://img.shields.io/github/languages/top/HiMeditator/auto-caption?color=royalblue">
    <img src="https://img.shields.io/github/repo-size/HiMeditator/auto-caption?color=green">
    <img src="https://img.shields.io/github/stars/HiMeditator/auto-caption?style=social">
  </p>
  <p>
    | <a href="./README.md">简体中文</a>
    | <a href="./README_en.md">English</a>
    | <b>日本語</b> |
  </p>
  <p><i>Version v0.2.0 has been released. Version v1.0.0, which is expected to add a local caption engine, is under development...</i></p>
  <p><i>Version v0.4.0, which includes the Vosk local caption engine, has been released. <b>The local caption engine does not include translation yet</b>; the local translation module is still under development...</i></p>
</div>



## 📥 Download

[GitHub Releases](https://github.com/HiMeditator/auto-caption/releases)

@@ -20,36 +33,80 @@

[Auto Caption User Manual](./docs/user-manual/ja.md)

[Caption Engine Manual](./docs/engine-manual/ja.md)
[Caption Engine Documentation](./docs/engine-manual/ja.md)

[Project API Documentation (Chinese)](./docs/api-docs/electron-ipc.md)

### Basic Usage
## 📖 Basic Usage

Currently, only an installable version for Windows is provided. To use the default Gummy caption engine, first obtain an API KEY from the Alibaba Cloud Model Studio platform and set it in your environment variables; the model then works normally.
Installable versions are currently provided for Windows and macOS.

**The international version of Alibaba Cloud does not provide the Gummy model, so non-Chinese users currently cannot use the default caption engine. A new local caption engine is under development so that every user has one available.**
> The international version of Alibaba Cloud services does not provide the Gummy model, so non-Chinese users currently cannot use the Gummy caption engine.

Related tutorials:
- [Obtain an API KEY (Chinese)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configure the API Key in environment variables (Chinese)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables).
To use the default Gummy caption engine (speech recognition and translation with a cloud model), first obtain an API KEY from the Alibaba Cloud Bailian platform, then add it in the software settings or set it as an environment variable (only Windows supports reading the API KEY from environment variables). Related tutorials:

- [Obtain an API KEY (Chinese)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configure the API Key through environment variables (Chinese)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

> The recognition accuracy of Vosk models is low; use them with caution.

To use the Vosk local caption engine, first download the model you need from the [Vosk Models](https://alphacephei.com/vosk/models) page, extract it locally, and add the model folder path in the software settings. The Vosk caption engine does not support caption translation yet.



**If the caption engines above do not meet your needs and you know Python, you can develop your own caption engine. See the [Caption Engine Manual](./docs/engine-manual/ja.md) for details.**

If you want to understand how the caption engine works, or you want to develop your own caption engine, see the [Caption Engine Manual](./docs/engine-manual/ja.md).
## ✨ Features

- Multiple interface languages
- Cross-platform, multilingual UI support
- Rich caption style settings
- Flexible caption engine selection
- Recognition and translation in multiple languages
- Multilingual recognition and translation
- Caption history display and export
- Caption generation from audio output and microphone input
- Caption generation from audio output or microphone input

Precautions:
- Windows supports caption generation from both audio output and microphone input.
- Linux currently only supports caption generation from microphone input.
- macOS is not supported yet.
Notes:
- Windows and macOS support caption generation from both audio output and microphone input, but **capturing system audio output on macOS requires setup; see the [Auto Caption User Manual](./docs/user-manual/ja.md) for details.**
- Linux currently cannot capture system audio output and only supports caption generation from microphone input.

## 🚀 Project Execution
## ⚙️ Caption Engines

The software currently ships with 2 caption engines, and 1 new engine is planned. Details follow.

### Gummy Caption Engine (Cloud)

Built on Tongyi Lab's [Gummy speech translation model](https://help.aliyun.com/zh/model-studio/gummy-speech-recognition-translation/); the cloud model is invoked through the [Alibaba Cloud Bailian](https://bailian.console.aliyun.com) API.

**Model parameters:**

- Supported audio sample rates: 16 kHz and above
- Audio sample depth: 16-bit
- Supported audio channels: mono
- Recognizable languages: Chinese, English, Japanese, Korean, German, French, Russian, Italian, Spanish
- Supported translations:
  - Chinese → English, Japanese, Korean
  - English → Chinese, Japanese, Korean
  - Japanese, Korean, German, French, Russian, Italian, Spanish → Chinese or English

**Network traffic:**

The caption engine samples at the native sample rate (assumed 48 kHz) with a 16-bit sample depth and uploads mono audio, so the upload rate is approximately:

$$
48000\ \text{samples/second} \times 2\ \text{bytes/sample} \times 1\ \text{channel} = 96000\ \text{bytes/second} \approx 93.75\ \text{KB/s}
$$

The engine uploads data only while it is receiving an audio stream, so the actual upload rate may be lower. The return traffic for model results is small and is not considered here.

### Vosk Caption Engine (Local)

Built on [vosk-api](https://github.com/alphacep/vosk-api). It currently only generates the original transcript of the audio and does not support translation.

### FunASR Caption Engine (Local)

If feasible, it will be built on [FunASR](https://github.com/modelscope/FunASR). Research and feasibility verification have not been done yet.

## 🚀 Running the Project



@@ -59,51 +116,83 @@
npm install
```

### Build the Caption Engine
### Building the Caption Engine

First, move into the `caption-engine` folder and run the following commands to create a virtual environment:
First enter the `caption-engine` folder and run the following commands to create a virtual environment:

```bash
# in the ./caption-engine folder
python -m venv subenv
# or
python3 -m venv subenv
```

Next, activate the virtual environment:
Then activate the virtual environment:

```bash
# Windows
subenv/Scripts/activate
# Linux
# Linux or macOS
source subenv/bin/activate
```

Next, install the dependencies (on Linux, comment out `PyAudioWPatch` in `requirements.txt`; the module only applies to Windows):
Then install the dependencies (on Linux or macOS, comment out `PyAudioWPatch` in `requirements.txt`; the module is Windows-only).

> This step may fail, usually because a build failed; install the corresponding build tool packages based on the error message.

```bash
pip install -r requirements.txt
```

Next, build the project with `pyinstaller`:
Then build the project with `pyinstaller`:

```bash
pyinstaller --onefile main-gummy.py
pyinstaller ./main-gummy.spec
pyinstaller ./main-vosk.spec
```

At this point the build is complete, and the corresponding executables can be found in the `caption-engine/dist` folder. Then carry out the required steps.
The path to the `vosk` library in `main-vosk.spec` may be incorrect and must be configured for your actual environment.

### Run the Project
```
# Windows
vosk_path = str(Path('./subenv/Lib/site-packages/vosk').resolve())
# Linux or macOS
vosk_path = str(Path('./subenv/lib/python3.x/site-packages/vosk').resolve())
```

With this, the build is complete and the executables can be found in the `caption-engine/dist` folder. You can then proceed to the next steps.

### Running the Project

```bash
npm run dev
```
### Build the Project

The software is currently not adapted to macOS. Build on Windows or Linux; Windows, which implements the full feature set, is recommended.
### Building the Project

The software has currently only been built and tested on Windows and macOS; correct behavior on Linux is not guaranteed.

```bash
# For Windows
# For Windows
npm run build:win
# For macOS, not avaliable yet
# For macOS
npm run build:mac
# For Linux
# For Linux
npm run build:linux
```

Note: the configuration in `electron-builder.yml` at the project root must be adjusted for the target platform:

```yml
extraResources:
  # For Windows
  - from: ./caption-engine/dist/main-gummy.exe
    to: ./caption-engine/main-gummy.exe
  - from: ./caption-engine/dist/main-vosk.exe
    to: ./caption-engine/main-vosk.exe
  # For macOS and Linux
  # - from: ./caption-engine/dist/main-gummy
  #   to: ./caption-engine/main-gummy
  # - from: ./caption-engine/dist/main-vosk
  #   to: ./caption-engine/main-vosk
```
Binary file changed: Before 373 KiB, After 462 KiB
Binary file changed: Before 333 KiB, After 477 KiB
BIN assets/media/main_mac_en.png (new file): After 2.7 MiB
BIN assets/media/main_mac_ja.png (new file): After 2.7 MiB
BIN assets/media/main_mac_zh.png (new file): After 2.8 MiB
Binary file changed: Before 384 KiB, After 468 KiB
Binary file changed: Before 324 KiB, After 324 KiB
BIN assets/media/vosk_en.png (new file): After 73 KiB
BIN assets/media/vosk_ja.png (new file): After 76 KiB
BIN assets/media/vosk_zh.png (new file): After 79 KiB
@@ -5,7 +5,9 @@
The following icons are used under CC BY 4.0 license:

- icon.png
- icon.svg
- icon.icns

Source:

- https://icon-icons.com/en/pack/Duetone/2064
build/entitlements.mac.plist (new file, 12 lines)

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.allow-dyld-environment-variables</key>
    <true/>
  </dict>
</plist>
BIN build/icon.icns (new file)
BIN build/icon.png (new file): After 36 KiB

build/icon.svg (new file, 1 line): After 2.4 KiB

@@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="6 6 52 52"><defs><style>.cls-1{fill:#a8d2f0;}.cls-2{fill:#389ad6;}.cls-3,.cls-4{fill:none;}.cls-4{stroke:#295183;stroke-linecap:round;stroke-linejoin:round;stroke-width:2px;}.cls-5{fill:#295183;}</style></defs><title>weather, forecast, direction, compass</title><path class="cls-1" d="M25.56,17.37c-.87,6.45-1.73,22.73,10.26,29.37A1.77,1.77,0,0,1,35.15,50C27.56,51,15,50,13.05,33.13a1.9,1.9,0,0,1,0-.21c0-1.24.11-13.46,10.07-17.41A1.77,1.77,0,0,1,25.56,17.37Z"/><path class="cls-2" d="M30.32,35l1,4.45a3.2,3.2,0,0,0-.22.72c-.1.46-.19.92-.29,1.38-.13.68-.39,1.49-1.06,1.67s-1.32-.44-1.55-1.11S28,40.72,27.84,40s-.76-1.33-1.45-1.26c-.34,0-.62.27-1,.32-.78.16-.31-1.79-.46-2.13a1.67,1.67,0,0,0-1.08-.82c-.91-.27-3.85-.37-3.06-2.07a1.68,1.68,0,0,1,1.07-.76,9.87,9.87,0,0,1,1.4-.32,3.94,3.94,0,0,0,1.26-.32l4.44,1,1.07.23Z"/><path class="cls-2" d="M30.32,28.31l-.24,1.07L29,29.62,27.26,30a1.83,1.83,0,0,0,.52-.8A6,6,0,0,0,28,28c0-.26.07-.5.12-.74a1.26,1.26,0,0,1,.1-.29Z"/><path class="cls-2" d="M34.62,29.37l0-.2.69-.43a2.66,2.66,0,0,1-.38.7Z"/><line class="cls-3" x1="33.74" y1="37.87" x2="33.45" y2="39.16"/><path class="cls-2" d="M37,35.79A4.71,4.71,0,0,1,36,36a7.51,7.51,0,0,0-1,.17,2.43,2.43,0,0,0-.37.13,2,2,0,0,0-.62.47l.4-1.78.23-1.07,1.07-.23Z"/><polyline class="cls-4" points="32 20.86 30.47 27.68 30.17 28.99 29.95 29.95 28.99 30.17 27.42 30.52 26.41 30.75 25.24 31.01 20.86 32 25 32.93 28.99 33.83 29.95 34.04 30.17 35.01 31.07 39.01 32 43.14 32.99 38.75 33.25 37.59 33.47 36.6 33.83 35 34.04 34.04 35 33.83 36.27 33.54 43.14 32 35.01 30.17 34.28 30.01 34.04 29.95 34 29.77 33.83 28.99 33.38 26.98"/><polygon class="cls-4" points="30.17 28.99 29.95 29.95 28.99 30.17 28.09 28.74 26.98 26.98 28.29 27.81 30.17 28.99"/><polygon class="cls-4" points="30.17 35.01 26.98 37.02 28.99 33.83 29.95 34.04 30.17 35.01"/><polygon class="cls-4" points="37.02 37.02 35.26 35.91 33.83 35 34.04 34.04 35 33.83 36.2 35.72 37.02 37.02"/><polygon class="cls-4" points="37.02 26.98 35.01 30.17 34.28 30.01 34.04 29.95 34 29.77 33.83 28.99 37.02 26.98"/><path class="cls-4" d="M38.42,14.13A19.08,19.08,0,1,1,32,13a19.19,19.19,0,0,1,2,.11"/><circle class="cls-5" cx="32.03" cy="16.99" r="1"/><circle class="cls-5" cx="47.01" cy="32.03" r="1"/><circle class="cls-5" cx="31.97" cy="47.01" r="1"/><circle class="cls-5" cx="16.99" cy="31.97" r="1"/></svg>
caption-engine/audio2text/__init__.py (new file, 2 lines)

@@ -0,0 +1,2 @@
from dashscope.common.error import InvalidParameter
from .gummy import GummyTranslator
@@ -4,6 +4,7 @@ from dashscope.audio.asr import (
    TranslationResult,
    TranslationRecognizerRealtime
)
import dashscope
from datetime import datetime
import json
import sys
@@ -39,12 +40,12 @@ class Callback(TranslationRecognizerCallback):
            caption['text'] = transcription_result.text
            if caption['index'] != self.cur_id:
                self.cur_id = caption['index']
                cur_time = datetime.now().strftime('%H:%M:%S')
                cur_time = datetime.now().strftime('%H:%M:%S.%f')[:-3]
                caption['time_s'] = cur_time
                self.time_str = cur_time
            else:
                caption['time_s'] = self.time_str
            caption['time_t'] = datetime.now().strftime('%H:%M:%S')
            caption['time_t'] = datetime.now().strftime('%H:%M:%S.%f')[:-3]
            caption['translation'] = ""

        if translation_result is not None:
@@ -69,7 +70,17 @@ class Callback(TranslationRecognizerCallback):
            print(f"Error sending data to Node.js: {e}", file=sys.stderr)

class GummyTranslator:
    def __init__(self, rate, source, target):
        """
        Stream audio data through the Gummy engine and write JSON strings
        readable by the Auto Caption app to standard output.

        Init parameters:
            rate: audio sample rate
            source: source-language code string (zh, en, ja, ...)
            target: target-language code string (zh, en, ja, ...)
        """
    def __init__(self, rate, source, target, api_key):
        if api_key:
            dashscope.api_key = api_key
        self.translator = TranslationRecognizerRealtime(
            model = "gummy-realtime-v1",
            format = "pcm",
@@ -80,3 +91,15 @@ class GummyTranslator:
            translation_target_languages = [target],
            callback = Callback()
        )

    def start(self):
        """Start the Gummy engine."""
        self.translator.start()

    def send_audio_frame(self, data):
        """Send one audio frame."""
        self.translator.send_audio_frame(data)

    def stop(self):
        """Stop the Gummy engine."""
        self.translator.stop()
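
For orientation, a minimal sketch of driving the new wrapper methods (the sample rate, languages, and key below are illustrative assumptions; the real engine feeds PCM chunks captured by `sysaudio`):

```python
# Illustrative driver for GummyTranslator; values are assumptions.
from audio2text import GummyTranslator

gummy = GummyTranslator(rate=48000, source='en', target='zh', api_key='YOUR_API_KEY')
gummy.start()
try:
    frame = b'\x00' * 4800  # 50 ms of 16-bit mono silence at 48 kHz
    gummy.send_audio_frame(frame)
finally:
    gummy.stop()
```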
caption-engine/audioprcs/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
from .process import mergeChunkChannels, resampleRawChunk, resampleMonoChunk
caption-engine/audioprcs/process.py (new file, 68 lines)

@@ -0,0 +1,68 @@
import samplerate
import numpy as np

def mergeChunkChannels(chunk, channels):
    """
    Convert a multi-channel audio chunk into a mono audio chunk.

    Args:
        chunk: multi-channel audio chunk (bytes)
        channels: number of channels

    Returns:
        mono audio chunk (bytes)
    """
    # (length * channels,)
    chunk_np = np.frombuffer(chunk, dtype=np.int16)
    # (length, channels)
    chunk_np = chunk_np.reshape(-1, channels)
    # (length,)
    chunk_mono_f = np.mean(chunk_np.astype(np.float32), axis=1)
    chunk_mono = np.round(chunk_mono_f).astype(np.int16)
    return chunk_mono.tobytes()


def resampleRawChunk(chunk, channels, orig_sr, target_sr, mode="sinc_best"):
    """
    Convert a multi-channel audio chunk into a mono chunk, then resample it.

    Args:
        chunk: multi-channel audio chunk (bytes)
        channels: number of channels
        orig_sr: original sample rate
        target_sr: target sample rate
        mode: resampling mode, one of 'sinc_best' | 'sinc_medium' | 'sinc_fastest' | 'zero_order_hold' | 'linear'

    Returns:
        mono audio chunk (bytes)
    """
    # (length * channels,)
    chunk_np = np.frombuffer(chunk, dtype=np.int16)
    # (length, channels)
    chunk_np = chunk_np.reshape(-1, channels)
    # (length,)
    chunk_mono_f = np.mean(chunk_np.astype(np.float32), axis=1)
    chunk_mono = chunk_mono_f.astype(np.int16)
    ratio = target_sr / orig_sr
    chunk_mono_r = samplerate.resample(chunk_mono, ratio, converter_type=mode)
    chunk_mono_r = np.round(chunk_mono_r).astype(np.int16)
    return chunk_mono_r.tobytes()

def resampleMonoChunk(chunk, orig_sr, target_sr, mode="sinc_best"):
    """
    Resample a mono audio chunk.

    Args:
        chunk: mono audio chunk (bytes)
        orig_sr: original sample rate
        target_sr: target sample rate
        mode: resampling mode, one of 'sinc_best' | 'sinc_medium' | 'sinc_fastest' | 'zero_order_hold' | 'linear'

    Returns:
        mono audio chunk (bytes)
    """
    chunk_np = np.frombuffer(chunk, dtype=np.int16)
    ratio = target_sr / orig_sr
    chunk_r = samplerate.resample(chunk_np, ratio, converter_type=mode)
    chunk_r = np.round(chunk_r).astype(np.int16)
    return chunk_r.tobytes()
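
A small sketch of how these helpers compose, using a synthetic stereo chunk (the tone and sizes are illustrative):

```python
# Illustrative use of the audioprcs helpers on a synthetic stereo chunk.
import numpy as np
from audioprcs import mergeChunkChannels, resampleRawChunk

rate, channels = 48000, 2
t = np.arange(rate // 20) / rate                     # one 50 ms chunk
tone = (np.sin(2 * np.pi * 440 * t) * 10000).astype(np.int16)
stereo = np.column_stack([tone, tone]).tobytes()     # interleaved L/R

mono = mergeChunkChannels(stereo, channels)
print(len(stereo), len(mono))                        # 9600 4800

mono_16k = resampleRawChunk(stereo, channels, rate, 16000)
print(len(mono_16k))                                 # ~1600 bytes (800 samples)
```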
@@ -1,40 +1,43 @@
import sys

if sys.platform == 'win32':
    from sysaudio.win import AudioStream, mergeStreamChannels
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream, mergeStreamChannels
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

from audio2text.gummy import GummyTranslator
import sys
import argparse

def convert_audio_to_text(s_lang, t_lang, audio_type):
if sys.platform == 'win32':
    from sysaudio.win import AudioStream
elif sys.platform == 'darwin':
    from sysaudio.darwin import AudioStream
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

from audioprcs import mergeChunkChannels
from audio2text import InvalidParameter, GummyTranslator


def convert_audio_to_text(s_lang, t_lang, audio_type, chunk_rate, api_key):
    sys.stdout.reconfigure(line_buffering=True) # type: ignore
    stream = AudioStream(audio_type)
    stream.openStream()
    stream = AudioStream(audio_type, chunk_rate)

    if t_lang == 'none':
        gummy = GummyTranslator(stream.RATE, s_lang, None)
        gummy = GummyTranslator(stream.RATE, s_lang, None, api_key)
    else:
        gummy = GummyTranslator(stream.RATE, s_lang, t_lang)
    gummy.translator.start()
        gummy = GummyTranslator(stream.RATE, s_lang, t_lang, api_key)

    stream.openStream()
    gummy.start()

    while True:
        try:
            if not stream.stream: continue
            data = stream.stream.read(stream.CHUNK)
            data = mergeStreamChannels(data, stream.CHANNELS)
            chunk = stream.read_chunk()
            chunk_mono = mergeChunkChannels(chunk, stream.CHANNELS)
            try:
                gummy.translator.send_audio_frame(data)
            except:
                gummy.translator.start()
                gummy.translator.send_audio_frame(data)
            gummy.send_audio_frame(chunk_mono)
        except InvalidParameter:
            gummy.start()
            gummy.send_audio_frame(chunk_mono)
        except KeyboardInterrupt:
            stream.closeStream()
            gummy.translator.stop()
            gummy.stop()
            break


@@ -42,10 +45,14 @@ if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Convert system audio stream to text')
    parser.add_argument('-s', '--source_language', default='en', help='Source language code')
    parser.add_argument('-t', '--target_language', default='zh', help='Target language code')
    parser.add_argument('-a', '--audio_type', default='0', help='Audio stream source: 0 for output audio stream, 1 for input audio stream')
    parser.add_argument('-a', '--audio_type', default=0, help='Audio stream source: 0 for output audio stream, 1 for input audio stream')
    parser.add_argument('-c', '--chunk_rate', default=20, help='The number of audio stream chunks collected per second.')
    parser.add_argument('-k', '--api_key', default='', help='API KEY for Gummy model')
    args = parser.parse_args()
    convert_audio_to_text(
        args.source_language,
        args.target_language,
        0 if args.audio_type == '0' else 1
        int(args.audio_type),
        int(args.chunk_rate),
        args.api_key
    )
caption-engine/main-vosk.py (new file, 83 lines)

@@ -0,0 +1,83 @@
import sys
import json
import argparse
from datetime import datetime
import numpy.core.multiarray

if sys.platform == 'win32':
    from sysaudio.win import AudioStream
elif sys.platform == 'darwin':
    from sysaudio.darwin import AudioStream
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

from vosk import Model, KaldiRecognizer, SetLogLevel
from audioprcs import resampleRawChunk

SetLogLevel(-1)

def convert_audio_to_text(audio_type, chunk_rate, model_path):
    sys.stdout.reconfigure(line_buffering=True) # type: ignore

    if model_path.startswith('"'):
        model_path = model_path[1:]
    if model_path.endswith('"'):
        model_path = model_path[:-1]

    model = Model(model_path)
    recognizer = KaldiRecognizer(model, 16000)

    stream = AudioStream(audio_type, chunk_rate)
    stream.openStream()

    time_str = ''
    cur_id = 0
    prev_content = ''

    while True:
        chunk = stream.read_chunk()
        chunk_mono = resampleRawChunk(chunk, stream.CHANNELS, stream.RATE, 16000)

        caption = {}
        if recognizer.AcceptWaveform(chunk_mono):
            content = json.loads(recognizer.Result()).get('text', '')
            caption['index'] = cur_id
            caption['text'] = content
            caption['time_s'] = time_str
            caption['time_t'] = datetime.now().strftime('%H:%M:%S.%f')[:-3]
            caption['translation'] = ''
            prev_content = ''
            cur_id += 1
        else:
            content = json.loads(recognizer.PartialResult()).get('partial', '')
            if content == '' or content == prev_content:
                continue
            if prev_content == '':
                time_str = datetime.now().strftime('%H:%M:%S.%f')[:-3]
            caption['index'] = cur_id
            caption['text'] = content
            caption['time_s'] = time_str
            caption['time_t'] = datetime.now().strftime('%H:%M:%S.%f')[:-3]
            caption['translation'] = ''
            prev_content = content
        try:
            json_str = json.dumps(caption) + '\n'
            sys.stdout.write(json_str)
            sys.stdout.flush()
        except Exception as e:
            print(e)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Convert system audio stream to text')
    parser.add_argument('-a', '--audio_type', default=0, help='Audio stream source: 0 for output audio stream, 1 for input audio stream')
    parser.add_argument('-c', '--chunk_rate', default=20, help='The number of audio stream chunks collected per second.')
    parser.add_argument('-m', '--model_path', default='', help='The path to the vosk model.')
    args = parser.parse_args()
    convert_audio_to_text(
        int(args.audio_type),
        int(args.chunk_rate),
        args.model_path
    )
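
Both engines communicate with the Electron app by writing one JSON object per line to stdout. As a minimal sketch of a consumer (the command line and model path below are illustrative; the real app spawns the engine from Node):

```python
# Minimal sketch of a consumer for the engine's line-delimited JSON output.
# The command line below is illustrative; adjust paths to your environment.
import json
import subprocess

proc = subprocess.Popen(
    ['python', 'main-vosk.py', '-a', '1',
     '-m', './models/vosk-model-small-en-us-0.15'],  # hypothetical model path
    stdout=subprocess.PIPE,
    text=True,
)
assert proc.stdout is not None
for line in proc.stdout:
    caption = json.loads(line)  # keys: index, text, time_s, time_t, translation
    print(caption['index'], caption['time_t'], caption['text'])
```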
caption-engine/main-vosk.spec (new file, 42 lines)

@@ -0,0 +1,42 @@
# -*- mode: python ; coding: utf-8 -*-

from pathlib import Path

vosk_path = str(Path('./subenv/Lib/site-packages/vosk').resolve())

a = Analysis(
    ['main-vosk.py'],
    pathex=[],
    binaries=[],
    datas=[(vosk_path, 'vosk')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)

pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='main-vosk',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
@@ -1,5 +1,7 @@
dashscope==1.23.5
numpy==2.2.6
PyAudio==0.2.14
PyAudioWPatch==0.2.12.7 # Windows only
pyinstaller==6.14.1
dashscope
numpy
samplerate
PyAudio
PyAudioWPatch # Windows only
vosk
pyinstaller
caption-engine/sysaudio/__init__.py (new file, 0 lines)

caption-engine/sysaudio/darwin.py (new file, 85 lines)

@@ -0,0 +1,85 @@
"""Capture the macOS system audio input/output stream."""

import pyaudio


class AudioStream:
    """
    Capture the system audio stream (supports BlackHole for system audio output capture).

    Init parameters:
        audio_type: 0 - system audio output stream (requires BlackHole), 1 - system audio input stream
        chunk_rate: number of audio chunks captured per second, default 20
    """
    def __init__(self, audio_type=0, chunk_rate=20):
        self.audio_type = audio_type
        self.mic = pyaudio.PyAudio()
        if self.audio_type == 0:
            self.device = self.getOutputDeviceInfo()
        else:
            self.device = self.mic.get_default_input_device_info()
        self.stream = None
        self.SAMP_WIDTH = pyaudio.get_sample_size(pyaudio.paInt16)
        self.FORMAT = pyaudio.paInt16
        self.CHANNELS = self.device["maxInputChannels"]
        self.RATE = int(self.device["defaultSampleRate"])
        self.CHUNK = self.RATE // chunk_rate
        self.INDEX = self.device["index"]

    def getOutputDeviceInfo(self):
        """Find the input device matching the given keyword."""
        device_count = self.mic.get_device_count()
        for i in range(device_count):
            dev_info = self.mic.get_device_info_by_index(i)
            if 'blackhole' in dev_info["name"].lower():
                return dev_info
        raise Exception("The device containing BlackHole was not found.")

    def printInfo(self):
        dev_info = f"""
        Capture input device:
        - Device type: { "audio output" if self.audio_type == 0 else "audio input" }
        - Index: {self.device['index']}
        - Name: {self.device['name']}
        - Max input channels: {self.device['maxInputChannels']}
        - Default low input latency: {self.device['defaultLowInputLatency']}s
        - Default high input latency: {self.device['defaultHighInputLatency']}s
        - Default sample rate: {self.device['defaultSampleRate']}Hz

        Audio chunk size: {self.CHUNK}
        Sample width: {self.SAMP_WIDTH}
        Sample format: {self.FORMAT}
        Channels: {self.CHANNELS}
        Sample rate: {self.RATE}
        """
        print(dev_info)

    def openStream(self):
        """
        Open and return the system audio output stream.
        """
        if self.stream: return self.stream
        self.stream = self.mic.open(
            format = self.FORMAT,
            channels = int(self.CHANNELS),
            rate = self.RATE,
            input = True,
            input_device_index = int(self.INDEX)
        )
        return self.stream

    def read_chunk(self):
        """
        Read one chunk of audio data.
        """
        if not self.stream: return None
        return self.stream.read(self.CHUNK, exception_on_overflow=False)

    def closeStream(self):
        """
        Close the system audio output stream.
        """
        if self.stream is None: return
        self.stream.stop_stream()
        self.stream.close()
        self.stream = None
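
A short sketch of using the class (it assumes a BlackHole device is installed and configured as described in the user manual):

```python
# Illustrative capture loop; assumes a BlackHole device is present.
from sysaudio.darwin import AudioStream

stream = AudioStream(audio_type=0, chunk_rate=20)  # capture system output
stream.printInfo()
stream.openStream()
try:
    for _ in range(20):              # read about one second of audio
        chunk = stream.read_chunk()  # bytes: CHUNK frames x CHANNELS channels
        print(len(chunk))
finally:
    stream.closeStream()
```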
@@ -1,30 +1,17 @@
"""Capture the Linux system audio input stream."""

import pyaudio
import numpy as np

def mergeStreamChannels(data, channels):
    """
    Merge the current multi-channel stream data into mono stream data.

    Args:
        data: multi-channel data
        channels: number of channels

    Returns:
        mono_data_bytes: mono data
    """
    # (length * channels,)
    data_np = np.frombuffer(data, dtype=np.int16)
    # (length, channels)
    data_np_r = data_np.reshape(-1, channels)
    # (length,)
    mono_data = np.mean(data_np_r.astype(np.float32), axis=1)
    mono_data = mono_data.astype(np.int16)
    mono_data_bytes = mono_data.tobytes()
    return mono_data_bytes


class AudioStream:
    def __init__(self, audio_type=1):
    """
    Capture the system audio stream.

    Init parameters:
        audio_type: 0 - system audio output stream (unsupported, has no effect), 1 - system audio input stream (default)
        chunk_rate: number of audio chunks captured per second, default 20
    """
    def __init__(self, audio_type=1, chunk_rate=20):
        self.audio_type = audio_type
        self.mic = pyaudio.PyAudio()
        self.device = self.mic.get_default_input_device_info()
@@ -33,7 +20,7 @@ class AudioStream:
        self.FORMAT = pyaudio.paInt16
        self.CHANNELS = self.device["maxInputChannels"]
        self.RATE = int(self.device["defaultSampleRate"])
        self.CHUNK = self.RATE // 20
        self.CHUNK = self.RATE // chunk_rate
        self.INDEX = self.device["index"]

    def printInfo(self):
@@ -49,7 +36,7 @@ class AudioStream:

        Audio chunk size: {self.CHUNK}
        Sample width: {self.SAMP_WIDTH}
        Audio data format: {self.FORMAT}
        Sample format: {self.FORMAT}
        Channels: {self.CHANNELS}
        Sample rate: {self.RATE}
        """
@@ -62,13 +49,20 @@ class AudioStream:
        if self.stream: return self.stream
        self.stream = self.mic.open(
            format = self.FORMAT,
            channels = self.CHANNELS,
            channels = int(self.CHANNELS),
            rate = self.RATE,
            input = True,
            input_device_index = self.INDEX
            input_device_index = int(self.INDEX)
        )
        return self.stream


    def read_chunk(self):
        """
        Read one chunk of audio data.
        """
        if not self.stream: return None
        return self.stream.read(self.CHUNK)

    def closeStream(self):
        """
        Close the system audio output stream.
@@ -76,4 +70,4 @@ class AudioStream:
        if self.stream is None: return
        self.stream.stop_stream()
        self.stream.close()
        self.stream = None
        self.stream = None
@@ -1,7 +1,6 @@
"""Capture the Windows system audio output stream."""
"""Capture the Windows system audio input/output stream."""

import pyaudiowpatch as pyaudio
import numpy as np


def getDefaultLoopbackDevice(mic: pyaudio.PyAudio, info = True)->dict:
@@ -40,35 +39,15 @@ def getDefaultLoopbackDevice(mic: pyaudio.PyAudio, info = True)->dict:
    return default_speaker


def mergeStreamChannels(data, channels):
    """
    Merge the current multi-channel stream data into mono stream data.

    Args:
        data: multi-channel data
        channels: number of channels

    Returns:
        mono_data_bytes: mono data
    """
    # (length * channels,)
    data_np = np.frombuffer(data, dtype=np.int16)
    # (length, channels)
    data_np_r = data_np.reshape(-1, channels)
    # (length,)
    mono_data = np.mean(data_np_r.astype(np.float32), axis=1)
    mono_data = mono_data.astype(np.int16)
    mono_data_bytes = mono_data.tobytes()
    return mono_data_bytes

class AudioStream:
    """
    Capture the system audio stream.

    Parameters:
        audio_type: (default) 0 - system audio output stream, 1 - system audio input stream
    Init parameters:
        audio_type: 0 - system audio output stream (default), 1 - system audio input stream
        chunk_rate: number of audio chunks captured per second, default 20
    """
    def __init__(self, audio_type=0):
    def __init__(self, audio_type=0, chunk_rate=20):
        self.audio_type = audio_type
        self.mic = pyaudio.PyAudio()
        if self.audio_type == 0:
@@ -78,15 +57,15 @@ class AudioStream:
        self.stream = None
        self.SAMP_WIDTH = pyaudio.get_sample_size(pyaudio.paInt16)
        self.FORMAT = pyaudio.paInt16
        self.CHANNELS = self.device["maxInputChannels"]
        self.CHANNELS = int(self.device["maxInputChannels"])
        self.RATE = int(self.device["defaultSampleRate"])
        self.CHUNK = self.RATE // 20
        self.CHUNK = self.RATE // chunk_rate
        self.INDEX = self.device["index"]

    def printInfo(self):
        dev_info = f"""
        Capture device:
        - Device type: { "audio input" if self.audio_type == 0 else "audio output" }
        - Device type: { "audio output" if self.audio_type == 0 else "audio input" }
        - Index: {self.device['index']}
        - Name: {self.device['name']}
        - Max input channels: {self.device['maxInputChannels']}
@@ -97,7 +76,7 @@ class AudioStream:

        Audio chunk size: {self.CHUNK}
        Sample width: {self.SAMP_WIDTH}
        Audio data format: {self.FORMAT}
        Sample format: {self.FORMAT}
        Channels: {self.CHANNELS}
        Sample rate: {self.RATE}
        """
@@ -117,6 +96,13 @@ class AudioStream:
        )
        return self.stream

    def read_chunk(self):
        """
        Read one chunk of audio data.
        """
        if not self.stream: return None
        return self.stream.read(self.CHUNK, exception_on_overflow=False)

    def closeStream(self):
        """
        Close the system audio output stream.
@@ -29,6 +29,7 @@

### New features

- Added hiding of overly long caption content (#1)
- Added multiple UI languages (Chinese, English, Japanese)
- Added a dark theme

@@ -40,10 +41,49 @@

### Bug fixes

- Fixed the error raised after the caption engine idles for a long time
- Fixed the error raised after the caption engine idles for a long time (#2)

### New documentation

- Added a Japanese README
- Added English and Japanese caption engine manuals and user manuals
- Added the Electron IPC API documentation

## v0.3.0

2025-07-09

Refactored the caption engine code, adapted the software to macOS, and added new features.

### New features

- Added in-app API KEY configuration
- Added caption font weight and text shadow settings
- Added copying caption history to the clipboard (#3)

### Improvements

- Caption timestamps are now accurate to the millisecond
- More detailed documentation (caption engine specifications added; user and engine manuals updated) (#4)
- Adapted to macOS
- The caption window now has a higher always-on-top priority
- The preview window shows the latest caption content in real time

### Bug fixes

- Fixed the dark system theme loading as light when following the system theme

## v0.4.0

2025-07-11

Added the Vosk local caption engine, updated the project documentation, and continued improving the experience.

### New features

- Added a Vosk-based caption engine; **the Vosk caption engine does not support translation yet**
- Updated the UI with a Vosk engine option and a model path setting

### Improvements

- The icon in the top-right corner of the caption window now matches the original caption font color
docs/TODO.md (21 changes)

@@ -1,6 +1,23 @@
## Done

- [x] Add English and Japanese language support *2025/07/04*
- [x] Add a dark theme *2025/07/04*
- [x] Improve long-caption display *2025/07/05*
- [x] Fix the caption engine erroring when idle *2025/07/05*
- [ ] Add more caption engines
- [ ] Reduce the application size
- [x] Increase the caption window's always-on-top priority *2025/07/07*
- [x] Add detailed specifications for the built-in caption engines *2025/07/07*
- [x] Add copying captions to the clipboard *2025/07/08*
- [x] Adapt to macOS *2025/07/08*
- [x] Add caption text outline *2025/07/09*
- [x] Add a Vosk-based caption engine *2025/07/09*

## To do

- [ ] Add an Ollama model for local caption engine translation
- [ ] Add a local caption engine
- [ ] Verify / add a FunASR-based caption engine
- [ ] Trim unnecessary application size

## Distant future

- [ ] Redevelop with the Tauri framework
@@ -20,11 +20,11 @@

### `both.window.mounted`

**介绍:** 前端窗口挂载完毕,请求最新的配置数据

**发起方:** 前端

**接收方:** 后端

**数据类型:**

@@ -33,11 +33,24 @@

### `control.nativeTheme.get`

**介绍:** 前端获取系统当前的主题

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:**

- 发送:无数据
- 接收:`string`

### `control.folder.select`

**介绍:** 打开文件夹选择器,并将用户选择的文件夹路径返回给前端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:**

@@ -48,242 +61,242 @@

### `control.uiLanguage.change`

**介绍:** 前端修改界面语言,将修改同步给后端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** `UILanguage`

### `control.uiTheme.change`

**介绍:** 前端修改界面主题,将修改同步给后端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** `UITheme`

### `control.leftBarWidth.change`

**介绍:** 前端修改边栏宽度,将修改同步给后端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** `number`

### `control.captionLog.clear`

**介绍:** 清空字幕记录

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** 无数据

### `control.styles.change`

**介绍:** 前端修改字幕样式,将修改同步给后端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** `Styles`

### `control.styles.reset`

**介绍:** 将字幕样式恢复为默认

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** 无数据

### `control.controls.change`

**介绍:** 前端修改了字幕引擎配置,将最新配置发送给后端

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** `Controls`

### `control.captionWindow.activate`

**介绍:** 激活字幕窗口

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** 无数据

### `control.engine.start`

**介绍:** 启动字幕引擎

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** 无数据

### `control.engine.stop`

**介绍:** 关闭字幕引擎

**发起方:** 前端控制窗口

**接收方:** 后端控制窗口实例

**数据类型:** 无数据

### `caption.windowHeight.change`

**介绍:** 字幕窗口高度发生改变

**发起方:** 前端字幕窗口

**接收方:** 后端字幕窗口实例

**数据类型:** `number`

### `caption.pin.set`

**介绍:** 是否将窗口置顶

**发起方:** 前端字幕窗口

**接收方:** 后端字幕窗口实例

**数据类型:** `boolean`

### `caption.controlWindow.activate`

**介绍:** 激活控制窗口

**发起方:** 前端字幕窗口

**接收方:** 后端字幕窗口实例

**数据类型:** 无数据

### `caption.window.close`

**介绍:** 关闭字幕窗口

**发起方:** 前端字幕窗口

**接收方:** 后端字幕窗口实例

**数据类型:** 无数据

## 后端 ==> 前端

### `control.uiLanguage.set`

**介绍:** 后端将最新界面语言发送给前端,前端进行设置

**发起方:** 后端

**接收方:** 字幕窗口

**数据类型:** `UILanguage`

### `control.nativeTheme.change`

**介绍:** 系统主题发生改变

**发起方:** 后端

**接收方:** 前端控制窗口

**数据类型:** `string`

### `control.engine.started`

**介绍:** 引擎启动成功

**发起方:** 后端

**接收方:** 前端控制窗口

**数据类型:** 无数据

### `control.engine.stopped`

**介绍:** 引擎关闭

**发起方:** 后端

**接收方:** 前端控制窗口

**数据类型:** 无数据

### `control.error.occurred`

**介绍:** 发送错误

**发起方:** 后端

**接收方:** 前端控制窗口

**数据类型:** `string`

### `control.controls.set`

**介绍:** 后端将最新字幕引擎配置发送给前端,前端进行设置

**发起方:** 后端

**接收方:** 前端控制窗口

**数据类型:** `Controls`

### `both.styles.set`

**介绍:** 后端将最新字幕样式发送给前端,前端进行设置

**发起方:** 后端

**接收方:** 前端

**数据类型:** `Styles`

### `both.captionLog.add`

**介绍:** 添加一条新的字幕数据

**发起方:** 后端

**接收方:** 前端

**数据类型:** `CaptionItem`

### `both.captionLog.upd`

**介绍:** 更新最后一条字幕数据

**发起方:** 后端

**接收方:** 前端

**数据类型:** `CaptionItem`

### `both.captionLog.set`

**介绍:** 设置全部的字幕数据

**发起方:** 后端

**接收方:** 前端

**数据类型:** `CaptionItem[]`
@@ -1,67 +1,106 @@

# Caption Engine Documentation

Corresponding Version: v0.4.0



## Introduction to the Caption Engine

The so-called caption engine is actually a subprogram that captures real-time streaming data from the system's audio input (recording) or output (playing sound) and calls an audio-to-text model to generate captions for the corresponding audio. The generated captions are converted into a JSON-formatted string and passed to the main program through standard output (it must be ensured that the string read by the main program can be correctly interpreted as a JSON object). The main program reads and interprets the caption data, processes it, and then displays it on the window.

## Functions Required by the Caption Engine

### Audio Acquisition

First, your caption engine needs to capture streaming data from the system's audio input (recording) or output (playing sound). If using Python for development, you can use the PyAudio library to obtain microphone audio input data (cross-platform). Use the PyAudioWPatch library to get system audio output (Windows platform only).

Generally, the captured audio stream data consists of short audio chunks, and the size of these chunks should be adjusted according to the model. For example, Alibaba Cloud's Gummy model performs better with 0.05-second audio chunks than with 0.2-second ones (at a 48 kHz sample rate, a 0.05-second chunk is 2,400 frames).

### Audio Processing

The acquired audio stream may need preprocessing before being converted to text. For instance, Alibaba Cloud's Gummy model can only recognize single-channel audio streams, while the collected audio streams are typically dual-channel, thus requiring conversion from dual-channel to single-channel. Channel conversion can be achieved using methods in the NumPy library.

You can directly use the audio acquisition (`caption-engine/sysaudio`) and audio processing (`caption-engine/audioprcs`) modules I have developed; a sketch of the underlying channel merge follows.
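For illustration, here is a minimal NumPy version of that dual-to-mono merge, assuming interleaved 16-bit PCM chunks (the project's own implementation lives in `caption-engine/audioprcs` as `mergeChunkChannels`; the function below is only a sketch of the same idea):

```python
import numpy as np

def merge_chunk_channels(chunk: bytes, channels: int) -> bytes:
    """Average interleaved 16-bit PCM channels down to a mono chunk."""
    samples = np.frombuffer(chunk, dtype=np.int16)
    frames = samples.reshape(-1, channels)         # one row per audio frame
    mono = frames.astype(np.float32).mean(axis=1)  # average the channels
    return mono.astype(np.int16).tobytes()
```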
### Audio to Text Conversion

After obtaining the appropriate audio stream, you can convert it into text. This is generally done with one of various models, chosen based on your requirements.

A nearly complete implementation of a caption engine is as follows:

```python
import sys
import argparse

# Import the system audio acquisition module
if sys.platform == 'win32':
    from sysaudio.win import AudioStream
elif sys.platform == 'darwin':
    from sysaudio.darwin import AudioStream
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

# Import audio processing functions
from audioprcs import mergeChunkChannels
# Import the audio-to-text module
from audio2text import InvalidParameter, GummyTranslator


def convert_audio_to_text(s_lang, t_lang, audio_type, chunk_rate, api_key):
    # Set standard output to line buffering
    sys.stdout.reconfigure(line_buffering=True) # type: ignore

    # Create instances for audio acquisition and speech-to-text
    stream = AudioStream(audio_type, chunk_rate)
    if t_lang == 'none':
        gummy = GummyTranslator(stream.RATE, s_lang, None, api_key)
    else:
        gummy = GummyTranslator(stream.RATE, s_lang, t_lang, api_key)

    # Start the instances
    stream.openStream()
    gummy.start()

    while True:
        try:
            # Read audio stream data
            chunk = stream.read_chunk()
            chunk_mono = mergeChunkChannels(chunk, stream.CHANNELS)
            try:
                # Call the model for recognition and translation
                gummy.send_audio_frame(chunk_mono)
            except InvalidParameter:
                gummy.start()
                gummy.send_audio_frame(chunk_mono)
        except KeyboardInterrupt:
            stream.closeStream()
            gummy.stop()
            break
```
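The excerpt above imports `argparse` but omits the command-line wiring. A hypothetical entry point could look like the following (the flag names here are illustrative only, not the actual interface of `main-gummy.py`):

```python
if __name__ == '__main__':
    # Hypothetical CLI flags -- check main-gummy.py for the real interface
    parser = argparse.ArgumentParser()
    parser.add_argument('--source-lang', default='en')
    parser.add_argument('--target-lang', default='none')
    parser.add_argument('--audio-type', type=int, default=0)
    parser.add_argument('--chunk-rate', type=int, default=10)
    parser.add_argument('--api-key', default='')
    args = parser.parse_args()
    convert_audio_to_text(args.source_lang, args.target_lang,
                          args.audio_type, args.chunk_rate, args.api_key)
```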
### Caption Translation

Some speech-to-text models don't provide translation functionality, requiring an additional translation module. This part can use either cloud-based translation APIs or local translation models.

### Data Transmission

After obtaining the text of the current audio stream, it needs to be transmitted to the main program. The caption engine process passes the caption data to the Electron main process through standard output.

The content transmitted must be a JSON string, where the JSON object must contain the following parameters:

```typescript
export interface CaptionItem {
  index: number,       // Caption sequence number
  time_s: string,      // Caption start time
  time_t: string,      // Caption end time
  text: string,        // Caption content
  translation: string  // Caption translation
}
```

**It is essential to ensure that each time caption JSON data is output, the buffer is flushed, so that the string received by the Electron main process can always be interpreted as a JSON object.**

If using Python, you can refer to the following method to pass data to the main program:

@@ -84,7 +123,8 @@

```python
# caption-engine\main-gummy.py
sys.stdout.reconfigure(line_buffering=True)
...
```
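The truncated excerpt above comes down to one pattern: serialize each caption, write it as a single line, and flush. A minimal version of that emitter, matching the `send_to_node` helper from `main-gummy.py` (the example CaptionItem values, including the timestamp format, are illustrative):

```python
import sys
import json

sys.stdout.reconfigure(line_buffering=True)  # line-buffered standard output

def send_to_node(data):
    """Send one caption to the Node.js process as a single JSON line."""
    try:
        json_data = json.dumps(data) + '\n'
        sys.stdout.write(json_data)
        sys.stdout.flush()  # the main process must receive a complete JSON object
    except Exception as e:
        print(f"Error sending data to Node.js: {e}", file=sys.stderr)

# Example: one CaptionItem as defined above
send_to_node({
    "index": 0,
    "time_s": "00:00:01.000",  # illustrative timestamp format
    "time_t": "00:00:02.500",
    "text": "Hello world",
    "translation": "你好,世界"
})
```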
Data receiver code is as follows:

```typescript
// src\main\utils\engine.ts
...
this.process.stdout.on('data', (data) => {
  const lines = data.toString().split('\n');
  lines.forEach((line: string) => {
    if (line.trim()) {
      try {
        const caption = JSON.parse(line);
        addCaptionLog(caption);
      } catch (e) {
        controlWindow.sendErrorMessage('Unable to parse the output from the caption engine as a JSON object: ' + e)
        console.error('[ERROR] Error parsing JSON:', e);
      }
    }
  });
});

this.process.stderr.on('data', (data) => {
  controlWindow.sendErrorMessage('Caption engine error: ' + data)
  console.error(`[ERROR] Subprocess error: ${data}`);
});
...
```

## Reference Code

The `main-gummy.py` file under the `caption-engine` folder in this project serves as the entry point for the default caption engine. The `src\main\utils\engine.ts` file contains the server-side code for acquiring and processing data from the caption engine. You can read and understand the implementation details and the complete execution process of the caption engine as needed.
@@ -1,71 +1,110 @@

# 字幕エンジンの説明文書

対応バージョン:v0.4.0

この文書は大規模モデルを使用して翻訳されていますので、内容に正確でない部分があるかもしれません。



## 字幕エンジンの紹介

所謂字幕エンジンは実際にはサブプログラムであり、システムの音声入力(録音)または出力(音声再生)のストリーミングデータをリアルタイムで取得し、音声からテキストへの変換モデルを使って対応する音声の字幕を生成します。生成された字幕はJSON形式の文字列データに変換され、標準出力を通じてメインプログラムに渡されます(メインプログラムが読み取った文字列が正しいJSONオブジェクトとして解釈されることが保証される必要があります)。メインプログラムは字幕データを読み取り、解釈して処理し、ウィンドウ上に表示します。

## 字幕エンジンに必要な機能

### 音声の取得

まず、あなたの字幕エンジンはシステムの音声入力(録音)または出力(音声再生)のストリーミングデータを取得する必要があります。Pythonを使用して開発する場合、PyAudioライブラリを使ってマイクからの音声入力データを取得できます(全プラットフォーム共通)。また、WindowsプラットフォームではPyAudioWPatchライブラリを使ってシステムの音声出力を取得することもできます。

一般的に取得される音声ストリームデータは、比較的短い時間間隔の音声ブロックで構成されています。モデルに合わせて音声ブロックのサイズを調整する必要があります。例えば、アリババクラウドのGummyモデルでは、0.05秒の音声ブロックを使用した認識結果の方が0.2秒の音声ブロックよりも優れています。

### 音声の処理

取得した音声ストリームは、テキストに変換する前に前処理が必要な場合があります。例えば、アリババクラウドのGummyモデルは単一チャンネルの音声ストリームしか認識できませんが、収集された音声ストリームは通常二重チャンネルであるため、二重チャンネルの音声ストリームを単一チャンネルに変換する必要があります。チャンネル数の変換はNumPyライブラリのメソッドを使って行うことができます。

私が開発した音声の取得(`caption-engine/sysaudio`)と音声の処理(`caption-engine/audioprcs`)モジュールを直接使用することができます。

### 音声からテキストへの変換

適切な音声ストリームを得た後、それをテキストに変換することができます。通常、様々なモデルを使って音声ストリームをテキストに変換します。必要に応じてモデルを選択することができます。

ほぼ完全な字幕エンジンの実装例:

```python
import sys
import argparse

# システム音声取得モジュールのインポート
if sys.platform == 'win32':
    from sysaudio.win import AudioStream
elif sys.platform == 'darwin':
    from sysaudio.darwin import AudioStream
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

# 音声処理関数のインポート
from audioprcs import mergeChunkChannels
# 音声からテキストへの変換モジュールのインポート
from audio2text import InvalidParameter, GummyTranslator


def convert_audio_to_text(s_lang, t_lang, audio_type, chunk_rate, api_key):
    # 標準出力をラインバッファリングに設定
    sys.stdout.reconfigure(line_buffering=True) # type: ignore

    # 音声の取得と音声からテキストへの変換のインスタンスを作成
    stream = AudioStream(audio_type, chunk_rate)
    if t_lang == 'none':
        gummy = GummyTranslator(stream.RATE, s_lang, None, api_key)
    else:
        gummy = GummyTranslator(stream.RATE, s_lang, t_lang, api_key)

    # インスタンスを開始
    stream.openStream()
    gummy.start()

    while True:
        try:
            # 音声ストリームデータを読み込む
            chunk = stream.read_chunk()
            chunk_mono = mergeChunkChannels(chunk, stream.CHANNELS)
            try:
                # モデルを使って翻訳を行う
                gummy.send_audio_frame(chunk_mono)
            except InvalidParameter:
                gummy.start()
                gummy.send_audio_frame(chunk_mono)
        except KeyboardInterrupt:
            stream.closeStream()
            gummy.stop()
            break
```

### 字幕翻訳

音声認識モデルによっては翻訳機能を提供していないため、別途翻訳モジュールを追加する必要があります。この部分にはクラウドベースの翻訳APIを使用することも、ローカルの翻訳モデルを使用することも可能です。

### データの伝送

現在の音声ストリームのテキストを得たら、それをメインプログラムに渡す必要があります。字幕エンジンプロセスは標準出力を通じてElectronのメインプロセスに字幕データを渡します。

渡す内容はJSON文字列でなければなりません。JSONオブジェクトには以下のパラメータを含める必要があります:

```typescript
export interface CaptionItem {
  index: number,       // 字幕番号
  time_s: string,      // 現在の字幕開始時間
  time_t: string,      // 現在の字幕終了時間
  text: string,        // 字幕内容
  translation: string  // 字幕翻訳
}
```

**必ず、字幕JSONデータを出力するたびにバッファをフラッシュし、Electronのメインプロセスが受け取る文字列が常にJSONオブジェクトとして解釈できるようにする必要があります。**

Python言語を使用する場合、以下の方法でデータをメインプログラムに渡すことができます:

```python
# caption-engine\main-gummy.py
@@ -75,44 +114,15 @@ sys.stdout.reconfigure(line_buffering=True)
...
def send_to_node(self, data):
    """
    Node.jsプロセスにデータを送信する
    """
    try:
        json_data = json.dumps(data) + '\n'
        sys.stdout.write(json_data)
        sys.stdout.flush()
    except Exception as e:
        print(f"Error sending data to Node.js: {e}", file=sys.stderr)
...
```

データ受信側のコードは以下の通りです:

```typescript
// src\main\utils\engine.ts
...
this.process.stdout.on('data', (data) => {
  const lines = data.toString().split('\n');
  lines.forEach((line: string) => {
    if (line.trim()) {
      try {
        const caption = JSON.parse(line);
        addCaptionLog(caption);
      } catch (e) {
        controlWindow.sendErrorMessage('キャプションエンジンの出力内容がJSONオブジェクトとして解析できません: ' + e)
        console.error('[ERROR] JSON解析エラー:', e);
      }
    }
  });
});

this.process.stderr.on('data', (data) => {
  controlWindow.sendErrorMessage('キャプションエンジンエラー: ' + data)
  console.error(`[ERROR] サブプロセスエラー: ${data}`);
});
...
```

## 参考コード

本プロジェクトの `caption-engine` フォルダにある `main-gummy.py` ファイルは、デフォルトの字幕エンジンのエントリポイントコードです。`src\main\utils\engine.ts` はサーバーサイドで字幕エンジンのデータを取得および処理するためのコードです。必要に応じて、字幕エンジンの実装詳細と完全な実行プロセスを理解するために読むことができます。
@@ -1,5 +1,7 @@

# 字幕引擎说明文档

对应版本:v0.4.0



## 字幕引擎介绍

@@ -18,33 +20,70 @@

获取到的音频流在转文字之前可能需要进行预处理。比如阿里云的 Gummy 模型只能识别单通道的音频流,而收集的音频流一般是双通道的,因此要将双通道音频流转换为单通道。通道数的转换可以使用 NumPy 库中的方法实现。

你可以直接使用我开发好的音频获取(`caption-engine/sysaudio`)和音频处理(`caption-engine/audioprcs`)模块。

### 音频转文字

在得到了合适的音频流后,就可以将音频流转换为文字了。一般使用各种模型来实现音频流转文字。可根据需求自行选择模型。

一个接近完整的字幕引擎实例如下:

```python
import sys
import argparse

# 引入系统音频获取模块
if sys.platform == 'win32':
    from sysaudio.win import AudioStream
elif sys.platform == 'darwin':
    from sysaudio.darwin import AudioStream
elif sys.platform == 'linux':
    from sysaudio.linux import AudioStream
else:
    raise NotImplementedError(f"Unsupported platform: {sys.platform}")

# 引入音频处理函数
from audioprcs import mergeChunkChannels
# 引入音频转文本模块
from audio2text import InvalidParameter, GummyTranslator


def convert_audio_to_text(s_lang, t_lang, audio_type, chunk_rate, api_key):
    # 设置标准输出为行缓冲
    sys.stdout.reconfigure(line_buffering=True) # type: ignore

    # 创建音频获取和语音转文字实例
    stream = AudioStream(audio_type, chunk_rate)
    if t_lang == 'none':
        gummy = GummyTranslator(stream.RATE, s_lang, None, api_key)
    else:
        gummy = GummyTranslator(stream.RATE, s_lang, t_lang, api_key)

    # 启动实例
    stream.openStream()
    gummy.start()

    while True:
        try:
            # 读取音频流数据
            chunk = stream.read_chunk()
            chunk_mono = mergeChunkChannels(chunk, stream.CHANNELS)
            try:
                # 调用模型进行翻译
                gummy.send_audio_frame(chunk_mono)
            except InvalidParameter:
                gummy.start()
                gummy.send_audio_frame(chunk_mono)
        except KeyboardInterrupt:
            stream.closeStream()
            gummy.stop()
            break
```

### 字幕翻译

有的语音转文字模型并不提供翻译,需要再添加一个翻译模块。这部分可以使用云端翻译 API 也可以使用本地翻译模型。

### 数据传递

在获取到当前音频流的文字后,需要将文字传递给主程序。字幕引擎进程通过标准输出将字幕数据传递给 Electron 主进程。
Modified images: 61 KiB → 105 KiB; 66 KiB → 132 KiB; 68 KiB → 111 KiB
docs/img/03.png (new file, 152 KiB)
docs/img/04.png (new file, 172 KiB)
docs/img/05.png (new file, 26 KiB)
@@ -1,34 +1,68 @@

# Auto Caption User Manual

Corresponding Version: v0.4.0

## Software Introduction

Auto Caption is a cross-platform caption display software that can capture system audio input (recording) or output (playback) streaming data in real time and use an audio-to-text model to generate captions for the corresponding audio. The default caption engine provided by the software (using the Alibaba Cloud Gummy model) supports recognition and translation in nine languages (Chinese, English, Japanese, Korean, German, French, Russian, Spanish, Italian).

Currently, the default caption engine of the software only has full functionality on Windows and macOS platforms. Additional configuration is required to capture system audio output on macOS.

On Linux platforms, it can only generate captions for audio input (microphone), and currently does not support generating captions for audio output (playback).



### Software Limitations

To use the Gummy caption engine, you need to obtain an API KEY from Alibaba Cloud.

Additional configuration is required to capture audio output on the macOS platform.

The software is built using Electron, so the software size is inevitably large.

## Preparation for Using the Gummy Engine

To use the default caption engine provided by the software (Alibaba Cloud Gummy), you need to obtain an API KEY from the Alibaba Cloud Bailian platform, then add the API KEY to the software settings or configure it in environment variables (only the Windows platform supports reading the API KEY from environment variables).

**The international version of Alibaba Cloud services does not provide the Gummy model, so non-Chinese users currently cannot use the default caption engine.**

Alibaba Cloud provides detailed tutorials for this part, which can be referenced:

- [Obtaining an API KEY (Chinese)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [Configuring the API Key through Environment Variables (Chinese)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

## Preparation for Using the Vosk Engine

To use the Vosk local caption engine, first download the model you need from the [Vosk Models](https://alphacephei.com/vosk/models) page. Then extract the downloaded model package locally and add the corresponding model folder path to the software settings. Currently, the Vosk caption engine does not support translating caption content; a minimal sketch of how such a model is driven follows.


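For the curious, the Vosk engine builds on the `vosk` Python package; a minimal sketch of the recognition loop looks like the following (the model folder name is an illustrative example, and the 16 kHz mono 16-bit PCM input is an assumption based on typical Vosk models, not the engine's actual configuration):

```python
import json
from vosk import Model, KaldiRecognizer

# Path to a model folder extracted from https://alphacephei.com/vosk/models
model = Model("vosk-model-small-en-us-0.15")  # illustrative model name
rec = KaldiRecognizer(model, 16000)           # assumes 16 kHz mono 16-bit PCM

def feed(chunk: bytes) -> None:
    """Feed one audio chunk and print finalized or in-progress text."""
    if rec.AcceptWaveform(chunk):
        print(json.loads(rec.Result())["text"])            # finalized sentence
    else:
        print(json.loads(rec.PartialResult())["partial"])  # partial hypothesis
```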
## Capturing System Audio Output on macOS

> Based on the [Setup Multi-Output Device](https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Output-Device) tutorial

The caption engine cannot directly capture system audio output on the macOS platform and requires an additional driver. The current caption engine uses [BlackHole](https://github.com/ExistentialAudio/BlackHole). First open Terminal and execute one of the following commands (the first is recommended):

```bash
brew install blackhole-2ch
brew install blackhole-16ch
brew install blackhole-64ch
```



After installation completes, open `Audio MIDI Setup` (searchable via `cmd + space`). Check whether BlackHole appears in the device list; if not, restart your computer.



Once BlackHole is confirmed installed, on the `Audio MIDI Setup` page click the plus (+) button at the bottom left and select "Create Multi-Output Device". Include both BlackHole and your desired audio output destination in the outputs. Finally, set this multi-output device as your default audio output device.



Now the caption engine can capture system audio output and generate captions.

## Software Usage

### Modifying Settings

Caption settings can be divided into three categories: general settings, caption engine settings, and caption style settings. Note that changes to general settings take effect immediately. For the other two categories, after making changes you need to click the "Apply" option in the upper right corner of the corresponding settings module for the changes to take effect. If you click "Cancel Changes," the current modifications will not be saved and the settings will revert to their previous state.

@@ -49,9 +83,9 @@ In the caption control window, you can see the records of all collected captions

## Caption Engine

The so-called caption engine is essentially a subprogram that captures real-time streaming data from system audio input (recording) or output (playback) and invokes a speech-to-text model to generate the corresponding captions. The generated captions are converted into JSON-formatted strings and passed to the main program through standard output. The main program reads the caption data, processes it, and displays it in the window.

The software provides two default caption engines. If you need other caption engines, you can invoke them by enabling the custom engine option (other engines need to be specifically developed for this software). The engine path refers to the location of the custom caption engine on your computer, while the engine command represents the runtime parameters of the custom caption engine, which should be configured according to the rules of that particular caption engine.


@@ -1,6 +1,6 @@

# Auto Caption ユーザーマニュアル

対応バージョン:v0.4.0

この文書は大規模モデルを使用して翻訳されていますので、内容に正確でない部分があるかもしれません。

@@ -8,28 +8,63 @@

Auto Caption は、クロスプラットフォームの字幕表示ソフトウェアで、システムの音声入力(録音)または出力(音声再生)のストリーミングデータをリアルタイムで取得し、音声からテキストに変換するモデルを利用して対応する音声の字幕を生成します。このソフトウェアが提供するデフォルトの字幕エンジン(アリババクラウド Gummy モデルを使用)は、9つの言語(中国語、英語、日本語、韓国語、ドイツ語、フランス語、ロシア語、スペイン語、イタリア語)の認識と翻訳をサポートしています。

現在、ソフトウェアのデフォルト字幕エンジンは Windows と macOS プラットフォームでのみ完全な機能を有しています。macOS でシステムオーディオ出力を取得するには追加の設定が必要です。

Linux プラットフォームでは、オーディオ入力(マイク)からの字幕生成のみ可能で、現在オーディオ出力(再生音)からの字幕生成はサポートしていません。



### ソフトウェアの欠点

Gummy 字幕エンジンを使用するには、アリババクラウドの API KEY を取得する必要があります。

macOS プラットフォームでオーディオ出力を取得するには追加の設定が必要です。

ソフトウェアは Electron で構築されているため、そのサイズは避けられないほど大きいです。

## Gummyエンジン使用前の準備

ソフトウェアが提供するデフォルトの字幕エンジン(Alibaba Cloud Gummy)を使用するには、Alibaba Cloud 百煉プラットフォームから API KEY を取得する必要があります。その後、API KEY をソフトウェア設定に追加するか、環境変数に設定します(Windows プラットフォームのみ環境変数からの API KEY 読み取りをサポート)。

**Alibaba Cloud の国際版サービスでは Gummy モデルを提供していないため、現在中国以外のユーザーはデフォルトの字幕エンジンを使用できません。**

この部分について Alibaba Cloud は詳細なチュートリアルを提供しており、以下を参照できます:

- [API KEY の取得(中国語)](https://help.aliyun.com/zh/model-studio/get-api-key)
- [環境変数を通じて API Key を設定(中国語)](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

## Voskエンジン使用前の準備

Vosk ローカル字幕エンジンを使用するには、まず [Vosk Models](https://alphacephei.com/vosk/models) ページから必要なモデルをダウンロードしてください。その後、ダウンロードしたモデルパッケージをローカルに解凍し、対応するモデルフォルダのパスをソフトウェア設定に追加します。現在、Vosk 字幕エンジンは字幕の翻訳をサポートしていません。



## macOS でのシステムオーディオ出力の取得方法

> [マルチ出力デバイスの設定](https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Output-Device) チュートリアルに基づいて作成

字幕エンジンは macOS プラットフォームで直接システムオーディオ出力を取得できず、追加のドライバーインストールが必要です。現在の字幕エンジンでは [BlackHole](https://github.com/ExistentialAudio/BlackHole) を使用しています。まずターミナルを開き、以下のいずれかのコマンドを実行してください(最初のオプションを推奨します):

```bash
brew install blackhole-2ch
brew install blackhole-16ch
brew install blackhole-64ch
```



インストール完了後、`オーディオ MIDI 設定`(`cmd + space` で検索可能)を開きます。デバイスリストに BlackHole が表示されているか確認してください。表示されていない場合はコンピュータを再起動してください。



BlackHole のインストールが確認できたら、`オーディオ MIDI 設定`ページで左下のプラス(+)ボタンをクリックし、「マルチ出力デバイスを作成」を選択します。出力に BlackHole と希望するオーディオ出力先の両方を含めてください。最後に、このマルチ出力デバイスをデフォルトのオーディオ出力デバイスに設定します。



これで字幕エンジンがシステムオーディオ出力をキャプチャし、字幕を生成できるようになります。

## ソフトウェアの使い方

### 設定の変更

@@ -51,9 +86,9 @@

## 字幕エンジン

字幕エンジンとは、システムのオーディオ入力(録音)または出力(再生音)のストリーミングデータをリアルタイムで取得し、音声テキスト変換モデルを呼び出して対応する字幕を生成するサブプログラムです。生成された字幕は JSON 形式の文字列に変換され、標準出力を通じてメインプログラムに渡されます。メインプログラムは字幕データを読み取り、処理した後、ウィンドウに表示します。

ソフトウェアには2つのデフォルトの字幕エンジンが用意されています。他の字幕エンジンが必要な場合、カスタムエンジンオプションを有効にすることで呼び出すことができます(他のエンジンはこのソフトウェア向けに特別に開発する必要があります)。エンジンパスはコンピュータ上のカスタム字幕エンジンの場所を指し、エンジンコマンドはカスタム字幕エンジンの実行パラメータを表します。これらは該当する字幕エンジンの規則に従って設定する必要があります。


@@ -1,28 +1,30 @@

# Auto Caption 用户手册

对应版本:v0.4.0

## 软件简介

Auto Caption 是一个跨平台的字幕显示软件,能够实时获取系统音频输入(录音)或输出(播放声音)的流式数据,并调用音频转文字的模型生成对应音频的字幕。软件提供的默认字幕引擎(使用阿里云 Gummy 模型)支持九种语言(中、英、日、韩、德、法、俄、西、意)的识别与翻译。

目前软件默认字幕引擎只有在 Windows 和 macOS 平台下才拥有完整功能,在 macOS 要获取系统音频输出需要额外配置。

在 Linux 平台下只能生成音频输入(麦克风)的字幕,暂不支持音频输出(播放声音)的字幕生成。



### 软件缺点

要使用默认的 Gummy 字幕引擎需要获取阿里云的 API KEY。

在 macOS 平台获取音频输出需要额外配置。

软件使用 Electron 构建,因此软件体积不可避免地较大。

## Gummy 引擎使用前准备

要使用软件提供的默认字幕引擎(阿里云 Gummy),需要从阿里云百炼平台获取 API KEY,然后将 API KEY 添加到软件设置中或者配置到环境变量中(仅 Windows 平台支持读取环境变量中的 API KEY)。

**国际版的阿里云服务并没有提供 Gummy 模型,因此目前非中国用户无法使用默认字幕引擎。**

这部分阿里云提供了详细的教程,可参考:

@@ -30,6 +32,38 @@

- [将 API Key 配置到环境变量](https://help.aliyun.com/zh/model-studio/configure-api-key-through-environment-variables)

## Vosk 引擎使用前准备

如果要使用 Vosk 本地字幕引擎,首先需要在 [Vosk Models](https://alphacephei.com/vosk/models) 页面下载你需要的模型。然后将下载的模型压缩包解压到本地,并将对应的模型文件夹的路径添加到软件的设置中。目前 Vosk 字幕引擎还不支持翻译字幕内容。



## macOS 获取系统音频输出

> 基于 [Setup Multi-Output Device](https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Output-Device) 教程编写

字幕引擎无法在 macOS 平台直接获取系统的音频输出,需要安装额外的驱动。目前字幕引擎采用的是 [BlackHole](https://github.com/ExistentialAudio/BlackHole)。首先打开终端,执行以下命令中的其中一个(建议选择第一个):

```bash
brew install blackhole-2ch
brew install blackhole-16ch
brew install blackhole-64ch
```



安装完成后打开 `音频 MIDI 设置`(`cmd + space` 打开搜索,可以搜索到)。观察设备列表中是否有 BlackHole 设备,如果没有需要重启电脑。



在确定安装好 BlackHole 设备后,在 `音频 MIDI 设置` 页面,点击左下角的加号,选择“创建多输出设备”。在输出中包含 BlackHole 和你想要的音频输出目标。最后将该多输出设备设置为默认音频输出设备。



现在字幕引擎就能捕获系统的音频输出并生成字幕了。

## 软件使用

### 修改设置

字幕设置可以分为三类:通用设置、字幕引擎设置、字幕样式设置。需要注意的是,修改通用设置是立即生效的。但是对于其他两类设置,修改后需要点击对应设置模块右上角的“应用”选项,更改才会真正生效。如果点击“取消更改”,那么当前修改将不会被保存,而是回退到上次修改的状态。

@@ -50,9 +84,9 @@

## 字幕引擎

所谓的字幕引擎实际上是一个子程序,它会实时获取系统音频输入(录音)或输出(播放声音)的流式数据,并调用音频转文字的模型生成对应音频的字幕。生成的字幕被转换为 JSON 字符串,并通过标准输出传递给主程序。主程序读取字幕数据,处理后显示在窗口上。

软件提供了两个默认的字幕引擎,如果你需要其他的字幕引擎,可以通过打开自定义引擎选项来调用其他字幕引擎(其他引擎需要针对该软件进行开发)。其中引擎路径是自定义字幕引擎在你的电脑上的路径,引擎指令是自定义字幕引擎的运行参数,这部分需要按该字幕引擎的规则进行填写。


@@ -6,17 +6,28 @@ files:
  - '!**/.vscode/*'
  - '!src/*'
  - '!electron.vite.config.{js,ts,mjs,cjs}'
  - '!{.eslintcache,eslint.config.mjs,.prettierignore,.prettierrc.yaml,dev-app-update.yml,CHANGELOG.md}'
  - '!{LICENSE,README.md,README_en.md,README_ja.md}'
  - '!{.env,.env.*,.npmrc,pnpm-lock.yaml}'
  - '!{tsconfig.json,tsconfig.node.json,tsconfig.web.json}'
  - '!caption-engine/*'
  - '!engine-test/*'
  - '!docs/*'
  - '!assets/*'
extraResources:
  # For Windows
  - from: ./caption-engine/dist/main-gummy.exe
    to: ./caption-engine/main-gummy.exe
  - from: ./caption-engine/dist/main-vosk.exe
    to: ./caption-engine/main-vosk.exe
  # For macOS and Linux
  # - from: ./caption-engine/dist/main-gummy
  #   to: ./caption-engine/main-gummy
  # - from: ./caption-engine/dist/main-vosk
  #   to: ./caption-engine/main-vosk
asarUnpack:
  - resources/**
win:
  executableName: auto-caption
  icon: build/icon.png
nsis:
  artifactName: ${name}-${version}-setup.${ext}
  shortcutName: ${productName}
engine-test/gummy.ipynb (new file)

Cell 1:

```python
from dashscope.audio.asr import * # type: ignore
import pyaudiowpatch as pyaudio
import numpy as np


def getDefaultSpeakers(mic: pyaudio.PyAudio, info = True):
    """
    获取默认的系统音频输出的回环设备
    Args:
        mic (pyaudio.PyAudio): pyaudio对象
        info (bool, optional): 是否打印设备信息. Defaults to True.

    Returns:
        dict: 系统音频输出的回环设备
    """
    try:
        WASAPI_info = mic.get_host_api_info_by_type(pyaudio.paWASAPI)
    except OSError:
        print("Looks like WASAPI is not available on the system. Exiting...")
        exit()

    default_speaker = mic.get_device_info_by_index(WASAPI_info["defaultOutputDevice"])
    if(info): print("wasapi_info:\n", WASAPI_info, "\n")
    if(info): print("default_speaker:\n", default_speaker, "\n")

    if not default_speaker["isLoopbackDevice"]:
        for loopback in mic.get_loopback_device_info_generator():
            if default_speaker["name"] in loopback["name"]:
                default_speaker = loopback
                if(info): print("Using loopback device:\n", default_speaker, "\n")
                break
        else:
            print("Default loopback output device not found.")
            print("Run `python -m pyaudiowpatch` to check available devices.")
            print("Exiting...")
            exit()

    if(info): print(f"Recording Device: #{default_speaker['index']} {default_speaker['name']}")
    return default_speaker


class Callback(TranslationRecognizerCallback):
    """
    语音大模型流式传输回调对象
    """
    def __init__(self):
        super().__init__()
        self.usage = 0
        self.sentences = []
        self.translations = []

    def on_open(self) -> None:
        print("\n流式翻译开始...\n")

    def on_close(self) -> None:
        print(f"\nTokens消耗:{self.usage}")
        print(f"流式翻译结束...\n")
        for i in range(len(self.sentences)):
            print(f"\n{self.sentences[i]}\n{self.translations[i]}\n")

    def on_event(
        self,
        request_id,
        transcription_result: TranscriptionResult,
        translation_result: TranslationResult,
        usage
    ) -> None:
        if transcription_result is not None:
            id = transcription_result.sentence_id
            text = transcription_result.text
            if transcription_result.stash is not None:
                stash = transcription_result.stash.text
            else:
                stash = ""
            print(f"#{id}: {text}{stash}")
            if usage: self.sentences.append(text)

        if translation_result is not None:
            lang = translation_result.get_language_list()[0]
            text = translation_result.get_translation(lang).text
            if translation_result.get_translation(lang).stash is not None:
                stash = translation_result.get_translation(lang).stash.text
            else:
                stash = ""
            print(f"#{lang}: {text}{stash}")
            if usage: self.translations.append(text)

        if usage: self.usage += usage['duration']
```

Cell 2:

```python
mic = pyaudio.PyAudio()
default_speaker = getDefaultSpeakers(mic, False)

SAMP_WIDTH = pyaudio.get_sample_size(pyaudio.paInt16)
FORMAT = pyaudio.paInt16
CHANNELS = default_speaker["maxInputChannels"]
RATE = int(default_speaker["defaultSampleRate"])
CHUNK = RATE // 10
INDEX = default_speaker["index"]

dev_info = f"""
采样输入设备:
 - 序号:{default_speaker['index']}
 - 名称:{default_speaker['name']}
 - 最大输入通道数:{default_speaker['maxInputChannels']}
 - 默认低输入延迟:{default_speaker['defaultLowInputLatency']}s
 - 默认高输入延迟:{default_speaker['defaultHighInputLatency']}s
 - 默认采样率:{default_speaker['defaultSampleRate']}Hz
 - 是否回环设备:{default_speaker['isLoopbackDevice']}

音频样本块大小:{CHUNK}
样本位宽:{SAMP_WIDTH}
音频数据格式:{FORMAT}
音频通道数:{CHANNELS}
音频采样率:{RATE}
"""
print(dev_info)
```

输出(摘要):采样输入设备 #26「耳机 (HUAWEI FreeLace 活力版) [Loopback]」,2 通道,48000 Hz,回环设备;音频样本块大小 4800,样本位宽 2,音频数据格式 8。

Cell 3:

```python
RECORD_SECONDS = 20 # 监听时长(s)

stream = mic.open(
    format = FORMAT,
    channels = CHANNELS,
    rate = RATE,
    input = True,
    input_device_index = INDEX
)
translator = TranslationRecognizerRealtime(
    model = "gummy-realtime-v1",
    format = "pcm",
    sample_rate = RATE,
    transcription_enabled = True,
    translation_enabled = True,
    source_language = "ja",
    translation_target_languages = ["zh"],
    callback = Callback()
)
translator.start()

for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream.read(CHUNK)
    data_np = np.frombuffer(data, dtype=np.int16)
    data_np_r = data_np.reshape(-1, CHANNELS)
    print(data_np_r.shape)
    mono_data = np.mean(data_np_r.astype(np.float32), axis=1)
    mono_data = mono_data.astype(np.int16)
    mono_data_bytes = mono_data.tobytes()
    translator.send_audio_frame(mono_data_bytes)

translator.stop()
stream.stop_stream()
stream.close()
```
engine-test/resample.ipynb (new file)

Cell 1:

```python
import sys
import os
import wave

current_dir = os.getcwd()
sys.path.append(os.path.join(current_dir, '../caption-engine'))

from sysaudio.darwin import AudioStream
from audioprcs import resampleRawChunk, mergeChunkChannels

stream = AudioStream(0)
stream.printInfo()
```

输出(摘要):采样输入设备 #0「BlackHole 2ch」(音频输出),2 通道,48000 Hz;音频样本块大小 2400,样本位宽 2,采样格式 8。

Cell 2:

```python
"""获取系统音频输出5秒,然后保存为wav文件"""

with wave.open('output.wav', 'wb') as wf:
    wf.setnchannels(stream.CHANNELS)
    wf.setsampwidth(stream.SAMP_WIDTH)
    wf.setframerate(stream.RATE)
    stream.openStream()

    print('Recording...')

    for _ in range(0, 100):
        chunk = stream.read_chunk()
        if isinstance(chunk, bytes):
            wf.writeframes(chunk)
        else:
            raise Exception('Error: chunk is not bytes')

    stream.closeStream()
    print('Done')
```

Cell 3:

```python
"""获取系统音频输入,转换为单通道音频,持续5秒,然后保存为wav文件"""

with wave.open('output.wav', 'wb') as wf:
    wf.setnchannels(1)
    wf.setsampwidth(stream.SAMP_WIDTH)
    wf.setframerate(stream.RATE)
    stream.openStream()

    print('Recording...')

    for _ in range(0, 100):
        chunk = mergeChunkChannels(
            stream.read_chunk(),
            stream.CHANNELS
        )
        if isinstance(chunk, bytes):
            wf.writeframes(chunk)
        else:
            raise Exception('Error: chunk is not bytes')

    stream.closeStream()
    print('Done')
```

Cell 4:

```python
"""获取系统音频输入,转换为单通道音频并重采样到16000Hz,持续5秒,然后保存为wav文件"""

with wave.open('output.wav', 'wb') as wf:
    wf.setnchannels(1)
    wf.setsampwidth(stream.SAMP_WIDTH)
    wf.setframerate(16000)
    stream.openStream()

    print('Recording...')

    for _ in range(0, 100):
        chunk = resampleRawChunk(
            stream.read_chunk(),
            stream.CHANNELS,
            stream.RATE,
            16000,
            mode="sinc_best"
        )
        if isinstance(chunk, bytes):
            wf.writeframes(chunk)
        else:
            raise Exception('Error: chunk is not bytes')

    stream.closeStream()
    print('Done')
```
64
engine-test/trans.ipynb
Normal file
@@ -0,0 +1,64 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "440d4a07",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "d:\\Projects\\auto-caption\\caption-engine\\subenv\\Lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n",
      "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n"
     ]
    },
    {
     "ename": "ImportError",
     "evalue": "\nMarianTokenizer requires the SentencePiece library but it was not found in your environment. Check out the instructions on the\ninstallation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones\nthat match your environment. Please note that you may need to restart your runtime after installation.\n",
     "output_type": "error",
     "traceback": [
      "\u001b[31m---------------------------------------------------------------------------\u001b[39m",
      "\u001b[31mImportError\u001b[39m Traceback (most recent call last)",
      "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 3\u001b[39m\n\u001b[32m 1\u001b[39m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mtransformers\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m MarianMTModel, MarianTokenizer\n\u001b[32m----> \u001b[39m\u001b[32m3\u001b[39m tokenizer = \u001b[43mMarianTokenizer\u001b[49m\u001b[43m.\u001b[49m\u001b[43mfrom_pretrained\u001b[49m(\u001b[33m\"\u001b[39m\u001b[33mHelsinki-NLP/opus-mt-en-zh\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m 4\u001b[39m model = MarianMTModel.from_pretrained(\u001b[33m\"\u001b[39m\u001b[33mHelsinki-NLP/opus-mt-en-zh\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m 6\u001b[39m tokenizer.save_pretrained(\u001b[33m\"\u001b[39m\u001b[33m./model_en_zh\u001b[39m\u001b[33m\"\u001b[39m)\n",
      "\u001b[36mFile \u001b[39m\u001b[32md:\\Projects\\auto-caption\\caption-engine\\subenv\\Lib\\site-packages\\transformers\\utils\\import_utils.py:1994\u001b[39m, in \u001b[36mDummyObject.__getattribute__\u001b[39m\u001b[34m(cls, key)\u001b[39m\n\u001b[32m 1992\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m (key.startswith(\u001b[33m\"\u001b[39m\u001b[33m_\u001b[39m\u001b[33m\"\u001b[39m) \u001b[38;5;129;01mand\u001b[39;00m key != \u001b[33m\"\u001b[39m\u001b[33m_from_config\u001b[39m\u001b[33m\"\u001b[39m) \u001b[38;5;129;01mor\u001b[39;00m key == \u001b[33m\"\u001b[39m\u001b[33mis_dummy\u001b[39m\u001b[33m\"\u001b[39m \u001b[38;5;129;01mor\u001b[39;00m key == \u001b[33m\"\u001b[39m\u001b[33mmro\u001b[39m\u001b[33m\"\u001b[39m \u001b[38;5;129;01mor\u001b[39;00m key == \u001b[33m\"\u001b[39m\u001b[33mcall\u001b[39m\u001b[33m\"\u001b[39m:\n\u001b[32m 1993\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28msuper\u001b[39m().\u001b[34m__getattribute__\u001b[39m(key)\n\u001b[32m-> \u001b[39m\u001b[32m1994\u001b[39m \u001b[43mrequires_backends\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m.\u001b[49m\u001b[43m_backends\u001b[49m\u001b[43m)\u001b[49m\n",
      "\u001b[36mFile \u001b[39m\u001b[32md:\\Projects\\auto-caption\\caption-engine\\subenv\\Lib\\site-packages\\transformers\\utils\\import_utils.py:1980\u001b[39m, in \u001b[36mrequires_backends\u001b[39m\u001b[34m(obj, backends)\u001b[39m\n\u001b[32m 1977\u001b[39m failed.append(msg.format(name))\n\u001b[32m 1979\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m failed:\n\u001b[32m-> \u001b[39m\u001b[32m1980\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mImportError\u001b[39;00m(\u001b[33m\"\u001b[39m\u001b[33m\"\u001b[39m.join(failed))\n",
      "\u001b[31mImportError\u001b[39m: \nMarianTokenizer requires the SentencePiece library but it was not found in your environment. Check out the instructions on the\ninstallation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones\nthat match your environment. Please note that you may need to restart your runtime after installation.\n"
     ]
    }
   ],
   "source": [
    "from transformers import MarianMTModel, MarianTokenizer\n",
    "\n",
    "tokenizer = MarianTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\")\n",
    "model = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\")\n",
    "\n",
    "tokenizer.save_pretrained(\"./model_en_zh\")\n",
    "model.save_pretrained(\"./model_en_zh\")\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "subenv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
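The cell fails because `MarianTokenizer` depends on the SentencePiece package, which is missing from the environment (`pip install sentencepiece` fixes it). Once the tokenizer and weights have been saved, they can be reloaded from the local folder; a minimal sketch assuming the `./model_en_zh` directory produced above and an installed PyTorch backend:

```python
from transformers import MarianMTModel, MarianTokenizer

# Load the tokenizer and weights written by save_pretrained() above.
tokenizer = MarianTokenizer.from_pretrained("./model_en_zh")
model = MarianMTModel.from_pretrained("./model_en_zh")

# Translate one English sentence to Chinese.
batch = tokenizer(["Real-time captions for any audio source."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```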
124
engine-test/vosk.ipynb
Normal file
@@ -0,0 +1,124 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "6fb12704",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "d:\\Projects\\auto-caption\\caption-engine\\subenv\\Lib\\site-packages\\vosk\\__init__.py\n"
     ]
    }
   ],
   "source": [
    "import vosk\n",
    "print(vosk.__file__)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "63a06f5c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    采样设备:\n",
      "    - 设备类型:音频输入\n",
      "    - 序号:1\n",
      "    - 名称:麦克风阵列 (Realtek(R) Audio)\n",
      "    - 最大输入通道数:2\n",
      "    - 默认低输入延迟:0.09s\n",
      "    - 默认高输入延迟:0.18s\n",
      "    - 默认采样率:44100.0Hz\n",
      "    - 是否回环设备:False\n",
      "\n",
      "    音频样本块大小:2205\n",
      "    样本位宽:2\n",
      "    采样格式:8\n",
      "    音频通道数:2\n",
      "    音频采样率:44100\n",
      "    \n"
     ]
    }
   ],
   "source": [
    "import sys\n",
    "import os\n",
    "import json\n",
    "from vosk import Model, KaldiRecognizer\n",
    "\n",
    "current_dir = os.getcwd()\n",
    "sys.path.append(os.path.join(current_dir, '../caption-engine'))\n",
    "\n",
    "from sysaudio.win import AudioStream\n",
    "from audioprcs import resampleRawChunk, mergeChunkChannels\n",
    "\n",
    "stream = AudioStream(1)\n",
    "stream.printInfo()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "5d5a0afa",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = Model(os.path.join(\n",
    "    current_dir,\n",
    "    '../caption-engine/models/vosk-model-small-cn-0.22'\n",
    "))\n",
    "recognizer = KaldiRecognizer(model, 16000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e9d1530",
   "metadata": {},
   "outputs": [],
   "source": [
    "stream.openStream()\n",
    "\n",
    "for i in range(200):\n",
    "    chunk = stream.read_chunk()\n",
    "    chunk_mono = resampleRawChunk(chunk, stream.CHANNELS, stream.RATE, 16000)\n",
    "    if recognizer.AcceptWaveform(chunk_mono):\n",
    "        result = json.loads(recognizer.Result())\n",
    "        print(\"acc:\", result.get(\"text\", \"\"))\n",
    "    else:\n",
    "        partial = json.loads(recognizer.PartialResult())\n",
    "        print(\"else:\", partial.get(\"partial\", \"\"))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "subenv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
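The last cell shows Vosk's standard streaming pattern: `AcceptWaveform()` returns `True` when an utterance has been finalized (read it with `Result()`), otherwise `PartialResult()` holds the in-progress hypothesis. The same loop in a self-contained, file-based sketch — the WAV path and model directory are placeholders for your own files:

```python
import json
import wave

from vosk import Model, KaldiRecognizer

# Placeholder paths: a 16 kHz mono PCM WAV and an unpacked Vosk model directory.
model = Model("models/vosk-model-small-cn-0.22")
recognizer = KaldiRecognizer(model, 16000)

with wave.open("output.wav", "rb") as wf:
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if recognizer.AcceptWaveform(data):
            print("final:", json.loads(recognizer.Result()).get("text", ""))
        else:
            print("partial:", json.loads(recognizer.PartialResult()).get("partial", ""))

# FinalResult() flushes whatever is still buffered at the end of the stream.
print("final:", json.loads(recognizer.FinalResult()).get("text", ""))
```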
52
package-lock.json
generated
@@ -1,12 +1,12 @@
{
  "name": "auto-caption",
  "version": "0.1.0",
  "version": "0.4.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "auto-caption",
      "version": "0.1.0",
      "version": "0.4.0",
      "hasInstallScript": true,
      "dependencies": {
        "@electron-toolkit/preload": "^3.0.1",
@@ -458,9 +458,9 @@
      }
    },
    "node_modules/@electron/asar/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -1277,9 +1277,9 @@
      }
    },
    "node_modules/@eslint/config-array/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -1348,9 +1348,9 @@
      }
    },
    "node_modules/@eslint/eslintrc/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -3354,9 +3354,9 @@
      "optional": true
    },
    "node_modules/brace-expansion": {
      "version": "2.0.1",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-2.0.1.tgz",
      "integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
      "version": "2.0.2",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz",
      "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -4358,9 +4358,9 @@
      }
    },
    "node_modules/dir-compare/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -5146,9 +5146,9 @@
      }
    },
    "node_modules/eslint/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -5838,9 +5838,9 @@
      }
    },
    "node_modules/glob/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -6495,9 +6495,9 @@
      }
    },
    "node_modules/jake/node_modules/brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmmirror.com/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "version": "1.1.12",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
      "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
@@ -1,6 +1,7 @@
{
  "name": "auto-caption",
  "version": "0.2.0",
  "productName": "Auto Caption",
  "version": "0.4.0",
  "description": "A cross-platform subtitle display software.",
  "main": "./out/main/index.js",
  "author": "himeditator",

(binary image diff: previous file 25 KiB)
@@ -1,7 +1,7 @@
import { shell, BrowserWindow, ipcMain } from 'electron'
import path from 'path'
import { is } from '@electron-toolkit/utils'
import icon from '../../resources/icon.png?asset'
import icon from '../../build/icon.png?asset'
import { controlWindow } from './ControlWindow'

class CaptionWindow {
@@ -16,16 +16,16 @@ class CaptionWindow {
      show: false,
      frame: false,
      transparent: true,
      alwaysOnTop: true,
      center: true,
      autoHideMenuBar: true,
      ...(process.platform === 'linux' ? { icon } : {}),
      webPreferences: {
        preload: path.join(__dirname, '../preload/index.js'),
        sandbox: false
      }
    })

    this.window.setAlwaysOnTop(true, 'screen-saver')

    this.window.on('ready-to-show', () => {
      this.window?.show()
    })
@@ -72,7 +72,8 @@ class CaptionWindow {

    ipcMain.on('caption.pin.set', (_, pinned) => {
      if(this.window){
        this.window.setAlwaysOnTop(pinned)
        if(pinned) this.window.setAlwaysOnTop(true, 'screen-saver')
        else this.window.setAlwaysOnTop(false)
      }
    })
  }

@@ -1,7 +1,7 @@
import { shell, BrowserWindow, ipcMain, nativeTheme } from 'electron'
import { shell, BrowserWindow, ipcMain, nativeTheme, dialog } from 'electron'
import path from 'path'
import { is } from '@electron-toolkit/utils'
import icon from '../../resources/icon.png?asset'
import icon from '../../build/icon.png?asset'
import { captionWindow } from './CaptionWindow'
import { allConfig } from './utils/AllConfig'
import { captionEngine } from './utils/CaptionEngine'
@@ -19,7 +19,6 @@ class ControlWindow {
      show: false,
      center: true,
      autoHideMenuBar: true,
      ...(process.platform === 'linux' ? { icon } : {}),
      webPreferences: {
        preload: path.join(__dirname, '../preload/index.js'),
        sandbox: false
@@ -66,8 +65,20 @@ class ControlWindow {
    })

    ipcMain.handle('control.nativeTheme.get', () => {
      if(nativeTheme.shouldUseDarkColors) return 'dark'
      return 'light'
      if(allConfig.uiTheme === 'system'){
        if(nativeTheme.shouldUseDarkColors) return 'dark'
        return 'light'
      }
      return allConfig.uiTheme
    })

    ipcMain.handle('control.folder.select', async () => {
      const result = await dialog.showOpenDialog({
        properties: ['openDirectory']
      });

      if (result.canceled) return "";
      return result.filePaths[0];
    })

    ipcMain.on('control.uiLanguage.change', (_, args) => {
@@ -1,5 +1,5 @@
export default {
  "gummy.env.missing": "DASHSCOPE_API_KEY environment variable not detected. To use the gummy engine, you need to obtain an API Key from Alibaba Cloud's Bailian platform and add it to your local environment variables.",
  "gummy.key.missing": "API KEY is not set, and the DASHSCOPE_API_KEY environment variable is not detected. To use the gummy engine, you need to obtain an API KEY from the Alibaba Cloud Bailian platform and add it to the settings or configure it in the local environment variables.",
  "platform.unsupported": "Unsupported platform: ",
  "engine.start.error": "Caption engine failed to start: ",
  "engine.output.parse.error": "Unable to parse caption engine output as a JSON object: ",

@@ -1,5 +1,5 @@
export default {
  "gummy.env.missing": "DASHSCOPE_API_KEY 環境変数が検出されませんでした。Gummy エンジンを使用するには、Alibaba Cloud の百煉プラットフォームから API Key を取得し、ローカル環境変数に追加する必要があります。",
  "gummy.key.missing": "API KEY が設定されておらず、DASHSCOPE_API_KEY 環境変数も検出されていません。Gummy エンジンを使用するには、Alibaba Cloud Bailian プラットフォームから API KEY を取得し、設定に追加するか、ローカルの環境変数に設定する必要があります。",
  "platform.unsupported": "サポートされていないプラットフォーム: ",
  "engine.start.error": "字幕エンジンの起動に失敗しました: ",
  "engine.output.parse.error": "字幕エンジンの出力を JSON オブジェクトとして解析できませんでした: ",

@@ -1,5 +1,5 @@
export default {
  "gummy.env.missing": "没有检测到 DASHSCOPE_API_KEY 环境变量,如果要使用 gummy 引擎,需要在阿里云百炼平台获取 API Key 并添加到本机环境变量",
  "gummy.key.missing": "没有设置 API KEY,也没有检测到 DASHSCOPE_API_KEY 环境变量。如果要使用 gummy 引擎,需要在阿里云百炼平台获取 API KEY,并在添加到设置中或者配置到本机环境变量。",
  "platform.unsupported": "不支持的平台:",
  "engine.start.error": "字幕引擎启动失败:",
  "engine.output.parse.error": "字幕引擎输出内容无法解析为 JSON 对象:",

@@ -6,9 +6,11 @@ export interface Controls {
  engineEnabled: boolean,
  sourceLang: string,
  targetLang: string,
  engine: 'gummy',
  engine: string,
  audio: 0 | 1,
  translation: boolean,
  API_KEY: string,
  modelPath: string,
  customized: boolean,
  customizedApp: string,
  customizedCommand: string
@@ -19,13 +21,20 @@ export interface Styles {
  fontFamily: string,
  fontSize: number,
  fontColor: string,
  fontWeight: number,
  background: string,
  opacity: number,
  showPreview: boolean,
  transDisplay: boolean,
  transFontFamily: string,
  transFontSize: number,
  transFontColor: string
  transFontColor: string,
  transFontWeight: number,
  textShadow: boolean,
  offsetX: number,
  offsetY: number,
  blur: number,
  textShadowColor: string
}

export interface CaptionItem {
@@ -37,6 +46,7 @@ export interface CaptionItem {
}

export interface FullConfig {
  platform: string,
  uiLanguage: UILanguage,
  uiTheme: UITheme,
  leftBarWidth: number,

@@ -11,13 +11,20 @@ const defaultStyles: Styles = {
  fontFamily: 'sans-serif',
  fontSize: 24,
  fontColor: '#000000',
  fontWeight: 4,
  background: '#dbe2ef',
  opacity: 80,
  showPreview: true,
  transDisplay: true,
  transFontFamily: 'sans-serif',
  transFontSize: 24,
  transFontColor: '#000000'
  transFontColor: '#000000',
  transFontWeight: 4,
  textShadow: false,
  offsetX: 2,
  offsetY: 2,
  blur: 0,
  textShadowColor: '#ffffff'
};

const defaultControls: Controls = {
@@ -26,6 +33,8 @@ const defaultControls: Controls = {
  engine: 'gummy',
  audio: 0,
  engineEnabled: false,
  API_KEY: '',
  modelPath: '',
  translation: true,
  customized: false,
  customizedApp: '',
@@ -51,6 +60,7 @@ class AllConfig {
    if(config.uiTheme) this.uiTheme = config.uiTheme
    if(config.leftBarWidth) this.leftBarWidth = config.leftBarWidth
    if(config.styles) this.setStyles(config.styles)
    if(process.platform !== 'win32' && process.platform !== 'darwin') config.controls.audio = 1
    if(config.controls) this.setControls(config.controls)
    console.log('[INFO] Read Config from:', configPath)
  }
@@ -71,6 +81,7 @@ class AllConfig {

  public getFullConfig(): FullConfig {
    return {
      platform: process.platform,
      uiLanguage: this.uiLanguage,
      uiTheme: this.uiTheme,
      leftBarWidth: this.leftBarWidth,
@@ -80,7 +91,7 @@ class AllConfig {
    }
  }

  public setStyles(args: Styles) {
  public setStyles(args: Object) {
    for(let key in this.styles) {
      if(key in args) {
        this.styles[key] = args[key]
@@ -98,7 +109,7 @@ class AllConfig {
    console.log(`[INFO] Send Styles to #${window.id}:`, this.styles)
  }

  public setControls(args: Controls) {
  public setControls(args: Object) {
    const engineEnabled = this.controls.engineEnabled
    for(let key in this.controls){
      if(key in args) {
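`setStyles()` and `setControls()` iterate over the keys the target object already has and copy only those, so stale or unknown keys in an old config file are silently ignored. The same merge pattern in a short Python sketch:

```python
def merge_known_keys(target: dict, incoming: dict) -> dict:
    """Copy only keys that already exist on target, like setStyles()/setControls() above."""
    for key in target:
        if key in incoming:
            target[key] = incoming[key]
    return target

defaults = {"fontSize": 24, "fontColor": "#000000"}
print(merge_known_keys(defaults, {"fontSize": 30, "legacyKey": True}))
# -> {'fontSize': 30, 'fontColor': '#000000'}  (legacyKey is dropped)
```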
@@ -13,26 +13,20 @@ export class CaptionEngine {
  processStatus: 'running' | 'stopping' | 'stopped' = 'stopped'

  private getApp(): boolean {
    allConfig.controls.customized = false
    if (allConfig.controls.customized && allConfig.controls.customizedApp) {
      this.appPath = allConfig.controls.customizedApp
      this.command = [allConfig.controls.customizedCommand]
      allConfig.controls.customized = true
    }
    else if (allConfig.controls.engine === 'gummy') {
      allConfig.controls.customized = false
      if(!process.env.DASHSCOPE_API_KEY) {
        controlWindow.sendErrorMessage(i18n('gummy.env.missing'))
      if(!allConfig.controls.API_KEY && !process.env.DASHSCOPE_API_KEY) {
        controlWindow.sendErrorMessage(i18n('gummy.key.missing'))
        return false
      }
      let gummyName = ''
      let gummyName = 'main-gummy'
      if (process.platform === 'win32') {
        gummyName = 'main-gummy.exe'
      }
      else if (process.platform === 'linux') {
        gummyName = 'main-gummy'
      }
      else {
        controlWindow.sendErrorMessage(i18n('platform.unsupported') + process.platform)
        throw new Error(i18n('platform.unsupported'))
        gummyName += '.exe'
      }
      if (is.dev) {
        this.appPath = path.join(
@@ -42,8 +36,7 @@ export class CaptionEngine {
      }
      else {
        this.appPath = path.join(
          process.resourcesPath,
          'caption-engine', 'dist', gummyName
          process.resourcesPath, 'caption-engine', gummyName
        )
      }
      this.command = []
@@ -53,15 +46,37 @@ export class CaptionEngine {
        allConfig.controls.targetLang : 'none'
      )
      this.command.push('-a', allConfig.controls.audio ? '1' : '0')

      console.log('[INFO] Engine Path:', this.appPath)
      console.log('[INFO] Engine Command:', this.command)
      if(allConfig.controls.API_KEY) {
        this.command.push('-k', allConfig.controls.API_KEY)
      }
    }
    else if(allConfig.controls.engine === 'vosk'){
      let voskName = 'main-vosk'
      if (process.platform === 'win32') {
        voskName += '.exe'
      }
      if (is.dev) {
        this.appPath = path.join(
          app.getAppPath(),
          'caption-engine', 'dist', voskName
        )
      }
      else {
        this.appPath = path.join(
          process.resourcesPath, 'caption-engine', voskName
        )
      }
      this.command = []
      this.command.push('-a', allConfig.controls.audio ? '1' : '0')
      this.command.push('-m', `"${allConfig.controls.modelPath}"`)
    }
    console.log('[INFO] Engine Path:', this.appPath)
    console.log('[INFO] Engine Command:', this.command)
    return true
  }

  public start() {
    if (this.processStatus!== 'stopped') {
    if (this.processStatus !== 'stopped') {
      return
    }
    if(!this.getApp()){ return }
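The flags assembled in `getApp()` are the caption engines' command-line contract: `-s` source language, `-t` target language (or `none`), `-a` audio source, `-k` API key for Gummy, `-m` model path for Vosk. A hypothetical sketch of the matching parser on the Python engine side — the flag meanings come from this diff, but the long option names and defaults are invented for illustration:

```python
import argparse

# Hypothetical parser mirroring the flags pushed by the Electron side above.
parser = argparse.ArgumentParser(description="auto-caption engine")
parser.add_argument("-s", "--source", default="auto",
                    help="source language code, or 'auto'")
parser.add_argument("-t", "--target", default="none",
                    help="target language code, or 'none' to disable translation")
parser.add_argument("-a", "--audio", type=int, choices=[0, 1], default=0,
                    help="0 = system audio output (speaker), 1 = input (microphone)")
parser.add_argument("-k", "--key", default="",
                    help="DashScope API KEY (gummy engine only)")
parser.add_argument("-m", "--model", default="",
                    help="Vosk model directory (vosk engine only)")
args = parser.parse_args()
```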
@@ -122,18 +137,29 @@ export class CaptionEngine {

  public stop() {
    if(this.processStatus !== 'running') return
    if (this.process) {
      if (this.process.pid) {
        console.log('[INFO] Trying to stop process, PID:', this.process.pid)
      if (process.platform === "win32" && this.process.pid) {
        exec(`taskkill /pid ${this.process.pid} /t /f`, (error) => {
          if (error) {
            controlWindow.sendErrorMessage(i18n('engine.shutdown.error') + error)
            console.error(`[ERROR] Failed to kill process: ${error}`)
          }
        });
      } else {
        this.process.kill('SIGKILL');
        let cmd = `kill ${this.process.pid}`;
        if (process.platform === "win32") {
          cmd = `taskkill /pid ${this.process.pid} /t /f`
        }
        exec(cmd, (error) => {
          if (error) {
            controlWindow.sendErrorMessage(i18n('engine.shutdown.error') + error)
            console.error(`[ERROR] Failed to kill process: ${error}`)
          }
        })
      }
      else {
        this.process = undefined;
        allConfig.controls.engineEnabled = false
        if(controlWindow.window){
          allConfig.sendControls(controlWindow.window)
          controlWindow.window.webContents.send('control.engine.stopped')
        }
        this.processStatus = 'stopped'
        console.log('[INFO] Process PID undefined, caption engine process stopped')
        return
      }
      this.processStatus = 'stopping'
      console.log('[INFO] Caption engine process stopping')

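The reworked `stop()` funnels both platforms through one `exec()` call: plain `kill <pid>` on POSIX and `taskkill /pid <pid> /t /f` on Windows, where `/t` terminates the child process tree and `/f` forces it. The same split, sketched in Python for reference (assuming a `subprocess.Popen` handle to the engine process):

```python
import subprocess
import sys

def stop_engine(proc: subprocess.Popen) -> None:
    """Terminate an engine process tree, mirroring the taskkill/kill split above."""
    if sys.platform == "win32":
        # /t kills the whole tree, /f forces termination.
        subprocess.run(["taskkill", "/pid", str(proc.pid), "/t", "/f"], check=False)
    else:
        subprocess.run(["kill", str(proc.pid)], check=False)
```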
@@ -16,6 +16,7 @@ onMounted(() => {
  useGeneralSettingStore().uiTheme = data.uiTheme
  useGeneralSettingStore().leftBarWidth = data.leftBarWidth
  useCaptionStyleStore().setStyles(data.styles)
  useEngineControlStore().platform = data.platform
  useEngineControlStore().setControls(data.controls)
  useCaptionLogStore().captionData = data.captionLog
})

@@ -11,6 +11,8 @@

.switch-label {
  display: inline-block;
  min-width: 80px;
  text-align: right;
  margin-right: 10px;
}

@@ -9,15 +9,36 @@
        style="margin-right: 20px;"
        @click="exportCaptions"
        :disabled="captionData.length === 0"
      >
        {{ $t('log.export') }}
      </a-button>
      >{{ $t('log.export') }}</a-button>

      <a-popover :title="$t('log.copyOptions')">
        <template #content>
          <div class="input-item">
            <span class="input-label">{{ $t('log.addIndex') }}</span>
            <a-switch v-model:checked="showIndex" />
            <span class="input-label">{{ $t('log.copyTime') }}</span>
            <a-switch v-model:checked="copyTime" />
          </div>
          <div class="input-item">
            <span class="input-label">{{ $t('log.copyContent') }}</span>
            <a-radio-group v-model:value="copyOption">
              <a-radio-button value="both">{{ $t('log.both') }}</a-radio-button>
              <a-radio-button value="source">{{ $t('log.source') }}</a-radio-button>
              <a-radio-button value="target">{{ $t('log.translation') }}</a-radio-button>
            </a-radio-group>
          </div>
        </template>
        <a-button
          style="margin-right: 20px;"
          @click="copyCaptions"
          :disabled="captionData.length === 0"
        >{{ $t('log.copy') }}</a-button>
      </a-popover>

      <a-button
        danger
        @click="clearCaptions"
      >
        {{ $t('log.clear') }}
      </a-button>
      >{{ $t('log.clear') }}</a-button>
    </div>
    <a-table
      :columns="columns"
@@ -49,8 +70,17 @@
import { ref } from 'vue'
import { storeToRefs } from 'pinia'
import { useCaptionLogStore } from '@renderer/stores/captionLog'
import { message } from 'ant-design-vue'
import { useI18n } from 'vue-i18n'
const { t } = useI18n()

const captionLog = useCaptionLogStore()
const { captionData } = storeToRefs(captionLog)

const showIndex = ref(true)
const copyTime = ref(true)
const copyOption = ref('both')

const pagination = ref({
  current: 1,
  pageSize: 10,
@@ -101,12 +131,28 @@ function exportCaptions() {
  URL.revokeObjectURL(url)
}

function copyCaptions() {
  let content = ''
  for(let i = 0; i < captionData.value.length; i++){
    const item = captionData.value[i]
    if(showIndex.value) content += `${i+1}\n`
    if(copyTime.value) content += `${item.time_s} --> ${item.time_t}\n`.replace(/\./g, ',')
    if(copyOption.value === 'both') content += `${item.text}\n${item.translation}\n\n`
    else if(copyOption.value === 'source') content += `${item.text}\n\n`
    else content += `${item.translation}\n\n`
  }
  navigator.clipboard.writeText(content)
  message.success(t('log.copySuccess'))
}

function clearCaptions() {
  captionLog.clear()
}
</script>

<style scoped>
@import url(../assets/input.css);

.caption-list {
  padding: 20px;
  border-radius: 8px;

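`copyCaptions()` assembles SRT-style blocks: an optional index, a `start --> end` time line with dots swapped for commas, then the selected text. The same formatting as a short Python sketch, assuming caption records with `time_s`, `time_t`, `text`, and `translation` fields:

```python
def to_srt(captions: list[dict]) -> str:
    """Format caption records as SRT-style blocks, like copyCaptions() above."""
    blocks = []
    for i, item in enumerate(captions, start=1):
        time_line = f"{item['time_s']} --> {item['time_t']}".replace(".", ",")
        blocks.append(f"{i}\n{time_line}\n{item['text']}\n{item['translation']}\n")
    return "\n".join(blocks)

print(to_srt([{"time_s": "00:00:01.200", "time_t": "00:00:03.800",
               "text": "Hello world", "translation": "你好,世界"}]))
```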
@@ -22,6 +22,7 @@
        v-model:value="currentFontFamily"
      />
    </div>

    <div class="input-item">
      <span class="input-label">{{ $t('style.fontColor') }}</span>
      <a-input
@@ -41,6 +42,16 @@
      />
      <div class="input-item-value">{{ currentFontSize }}px</div>
    </div>
    <div class="input-item">
      <span class="input-label">{{ $t('style.fontWeight') }}</span>
      <a-input
        class="input-area"
        type="range"
        min="1" max="9"
        v-model:value="currentFontWeight"
      />
      <div class="input-item-value">{{ currentFontWeight*100 }}</div>
    </div>
    <div class="input-item">
      <span class="input-label">{{ $t('style.background') }}</span>
      <a-input
@@ -70,6 +81,11 @@
      <span class="switch-label">{{ $t('style.translation') }}</span>
      <a-switch v-model:checked="currentTransDisplay" />
    </div>
    <span style="display:inline-block;width:20px;"></span>
    <div style="display: inline-block;">
      <span class="switch-label">{{ $t('style.textShadow') }}</span>
      <a-switch v-model:checked="currentTextShadow" />
    </div>
  </div>

  <div v-show="currentTransDisplay">
@@ -103,6 +119,60 @@
        />
        <div class="input-item-value">{{ currentTransFontSize }}px</div>
      </div>
      <div class="input-item">
        <span class="input-label">{{ $t('style.fontWeight') }}</span>
        <a-input
          class="input-area"
          type="range"
          min="1" max="9"
          v-model:value="currentTransFontWeight"
        />
        <div class="input-item-value">{{ currentTransFontWeight*100 }}</div>
      </div>
    </a-card>
  </div>

  <div v-show="currentTextShadow" style="margin-top:10px;">
    <a-card size="small" :title="$t('style.shadow.title')">
      <div class="input-item">
        <span class="input-label">{{ $t('style.shadow.offsetX') }}</span>
        <a-input
          class="input-area"
          type="range"
          min="-10" max="10"
          v-model:value="currentOffsetX"
        />
        <div class="input-item-value">{{ currentOffsetX }}px</div>
      </div>
      <div class="input-item">
        <span class="input-label">{{ $t('style.shadow.offsetY') }}</span>
        <a-input
          class="input-area"
          type="range"
          min="-10" max="10"
          v-model:value="currentOffsetY"
        />
        <div class="input-item-value">{{ currentOffsetY }}px</div>
      </div>
      <div class="input-item">
        <span class="input-label">{{ $t('style.shadow.blur') }}</span>
        <a-input
          class="input-area"
          type="range"
          min="0" max="10"
          v-model:value="currentBlur"
        />
        <div class="input-item-value">{{ currentBlur }}px</div>
      </div>
      <div class="input-item">
        <span class="input-label">{{ $t('style.shadow.color') }}</span>
        <a-input
          class="input-area"
          type="color"
          v-model:value="currentTextShadowColor"
        />
        <div class="input-item-value">{{ currentTextShadowColor }}</div>
      </div>
    </a-card>
  </div>
</a-card>
@@ -112,24 +182,27 @@
  v-if="currentPreview"
  class="preview-container"
  :style="{
    backgroundColor: addOpicityToColor(currentBackground, currentOpacity)
    backgroundColor: addOpicityToColor(currentBackground, currentOpacity),
    textShadow: currentTextShadow ? `${currentOffsetX}px ${currentOffsetY}px ${currentBlur}px ${currentTextShadowColor}` : 'none'
  }"
>
  <p :class="[captionStyle.lineBreak?'':'left-ellipsis']"
  <p :class="[currentLineBreak?'':'left-ellipsis']"
    :style="{
      fontFamily: currentFontFamily,
      fontSize: currentFontSize + 'px',
      color: currentFontColor
      color: currentFontColor,
      fontWeight: currentFontWeight * 100
    }">
    <span v-if="captionData.length">{{ captionData[captionData.length-1].text }}</span>
    <span v-else>{{ $t('example.original') }}</span>
  </p>
  <p :class="[captionStyle.lineBreak?'':'left-ellipsis']"
  <p :class="[currentLineBreak?'':'left-ellipsis']"
    v-if="currentTransDisplay"
    :style="{
      fontFamily: currentTransFontFamily,
      fontSize: currentTransFontSize + 'px',
      color: currentTransFontColor
      color: currentTransFontColor,
      fontWeight: currentTransFontWeight * 100
    }"
  >
    <span v-if="captionData.length">{{ captionData[captionData.length-1].translation }}</span>
@@ -147,7 +220,6 @@ import { storeToRefs } from 'pinia'
import { notification } from 'ant-design-vue'
import { useI18n } from 'vue-i18n'
import { useCaptionLogStore } from '@renderer/stores/captionLog';

const captionLog = useCaptionLogStore();
const { captionData } = storeToRefs(captionLog);

@@ -160,6 +232,7 @@ const currentLineBreak = ref<number>(0)
const currentFontFamily = ref<string>('sans-serif')
const currentFontSize = ref<number>(24)
const currentFontColor = ref<string>('#000000')
const currentFontWeight = ref<number>(4)
const currentBackground = ref<string>('#dbe2ef')
const currentOpacity = ref<number>(50)
const currentPreview = ref<boolean>(true)
@@ -167,6 +240,12 @@ const currentTransDisplay = ref<boolean>(true)
const currentTransFontFamily = ref<string>('sans-serif')
const currentTransFontSize = ref<number>(24)
const currentTransFontColor = ref<string>('#000000')
const currentTransFontWeight = ref<number>(4)
const currentTextShadow = ref<boolean>(false)
const currentOffsetX = ref<number>(2)
const currentOffsetY = ref<number>(2)
const currentBlur = ref<number>(0)
const currentTextShadowColor = ref<string>('#ffffff')

function addOpicityToColor(color: string, opicity: number) {
  const opicityValue = Math.round(opicity * 255 / 100);
@@ -178,6 +257,7 @@ function useSameStyle(){
  currentTransFontFamily.value = currentFontFamily.value;
  currentTransFontSize.value = currentFontSize.value;
  currentTransFontColor.value = currentFontColor.value;
  currentTransFontWeight.value = currentFontWeight.value;
}

function applyStyle(){
@@ -185,6 +265,7 @@ function applyStyle(){
  captionStyle.fontFamily = currentFontFamily.value;
  captionStyle.fontSize = currentFontSize.value;
  captionStyle.fontColor = currentFontColor.value;
  captionStyle.fontWeight = currentFontWeight.value;
  captionStyle.background = currentBackground.value;
  captionStyle.opacity = currentOpacity.value;
  captionStyle.showPreview = currentPreview.value;
@@ -192,6 +273,12 @@ function applyStyle(){
  captionStyle.transFontFamily = currentTransFontFamily.value;
  captionStyle.transFontSize = currentTransFontSize.value;
  captionStyle.transFontColor = currentTransFontColor.value;
  captionStyle.transFontWeight = currentTransFontWeight.value;
  captionStyle.textShadow = currentTextShadow.value;
  captionStyle.offsetX = currentOffsetX.value;
  captionStyle.offsetY = currentOffsetY.value;
  captionStyle.blur = currentBlur.value;
  captionStyle.textShadowColor = currentTextShadowColor.value;

  captionStyle.sendStylesChange();
@@ -206,6 +293,7 @@ function backStyle(){
  currentFontFamily.value = captionStyle.fontFamily;
  currentFontSize.value = captionStyle.fontSize;
  currentFontColor.value = captionStyle.fontColor;
  currentFontWeight.value = captionStyle.fontWeight;
  currentBackground.value = captionStyle.background;
  currentOpacity.value = captionStyle.opacity;
  currentPreview.value = captionStyle.showPreview;
@@ -213,6 +301,12 @@ function backStyle(){
  currentTransFontFamily.value = captionStyle.transFontFamily;
  currentTransFontSize.value = captionStyle.transFontSize;
  currentTransFontColor.value = captionStyle.transFontColor;
  currentTransFontWeight.value = captionStyle.transFontWeight;
  currentTextShadow.value = captionStyle.textShadow;
  currentOffsetX.value = captionStyle.offsetX;
  currentOffsetY.value = captionStyle.offsetY;
  currentBlur.value = captionStyle.blur;
  currentTextShadowColor.value = captionStyle.textShadowColor;
}

function resetStyle() {
@@ -229,6 +323,16 @@ watch(changeSignal, (val) => {

<style scoped>
@import url(../assets/input.css);
.general-note {
  padding: 10px 10px 0;
  max-width: min(36vw, 400px);
}

.hover-label {
  color: #1668dc;
  cursor: pointer;
  font-weight: bold;
}

.preview-container {
  line-height: 2em;

@@ -16,6 +16,7 @@
    <div class="input-item">
      <span class="input-label">{{ $t('engine.transLang') }}</span>
      <a-select
        :disabled="currentEngine === 'vosk'"
        class="input-area"
        v-model:value="currentTargetLang"
        :options="langList.filter((item) => item.value !== 'auto')"
@@ -32,6 +33,7 @@
    <div class="input-item">
      <span class="input-label">{{ $t('engine.audioType') }}</span>
      <a-select
        :disabled="platform !== 'win32' && platform !== 'darwin'"
        class="input-area"
        v-model:value="currentAudio"
        :options="audioType"
@@ -42,36 +44,73 @@
      <a-switch v-model:checked="currentTranslation" />
      <span style="display:inline-block;width:20px;"></span>
      <div style="display: inline-block;">
        <span class="switch-label">{{ $t('engine.customEngine') }}</span>
        <a-switch v-model:checked="currentCustomized" />
        <span class="switch-label">{{ $t('engine.showMore') }}</span>
        <a-switch v-model:checked="showMore" />
      </div>
    </div>
    <div v-show="currentCustomized">
      <a-card size="small" :title="$t('engine.custom.title')">
        <template #extra>
          <a-popover>
            <template #content>
              <p class="customize-note">{{ $t('engine.custom.note') }}</p>
            </template>
            <a><InfoCircleOutlined />{{ $t('engine.custom.attention') }}</a>
          </a-popover>
        </template>
        <div class="input-item">
          <span class="input-label">{{ $t('engine.custom.app') }}</span>
          <a-input
            class="input-area"
            v-model:value="currentCustomizedApp"
          ></a-input>
        </div>
        <div class="input-item">
          <span class="input-label">{{ $t('engine.custom.command') }}</span>
          <a-input
            class="input-area"
            v-model:value="currentCustomizedCommand"
          ></a-input>
        </div>
      </a-card>
    </div>

    <a-card size="small" :title="$t('engine.showMore')" v-show="showMore">
      <div class="input-item">
        <a-popover>
          <template #content>
            <p class="label-hover-info">{{ $t('engine.apikeyInfo') }}</p>
          </template>
          <span class="input-label info-label">{{ $t('engine.apikey') }}</span>
        </a-popover>
        <a-input
          class="input-area"
          type="password"
          v-model:value="currentAPI_KEY"
        />
      </div>
      <div class="input-item">
        <a-popover>
          <template #content>
            <p class="label-hover-info">{{ $t('engine.modelPathInfo') }}</p>
          </template>
          <span class="input-label info-label">{{ $t('engine.modelPath') }}</span>
        </a-popover>
        <span
          class="input-folder"
          @click="selectFolderPath"
        ><span><FolderOpenOutlined /></span></span>
        <a-input
          class="input-area"
          style="width:calc(100% - 140px);"
          v-model:value="currentModelPath"
        />
      </div>
      <div class="input-item">
        <span style="margin-right:5px;">{{ $t('engine.customEngine') }}</span>
        <a-switch v-model:checked="currentCustomized" />
      </div>
      <div v-show="currentCustomized">
        <a-card size="small" :title="$t('engine.custom.title')">
          <template #extra>
            <a-popover>
              <template #content>
                <p class="customize-note">{{ $t('engine.custom.note') }}</p>
              </template>
              <a><InfoCircleOutlined />{{ $t('engine.custom.attention') }}</a>
            </a-popover>
          </template>
          <div class="input-item">
            <span class="input-label">{{ $t('engine.custom.app') }}</span>
            <a-input
              class="input-area"
              v-model:value="currentCustomizedApp"
            ></a-input>
          </div>
          <div class="input-item">
            <span class="input-label">{{ $t('engine.custom.command') }}</span>
            <a-input
              class="input-area"
              v-model:value="currentCustomizedCommand"
            ></a-input>
          </div>
        </a-card>
      </div>
    </a-card>
  </a-card>
  <div style="height: 20px;"></div>
</template>
@@ -79,22 +118,25 @@
<script setup lang="ts">
import { ref, computed, watch } from 'vue'
import { storeToRefs } from 'pinia'
import { useGeneralSettingStore } from '@renderer/stores/generalSetting'
import { useEngineControlStore } from '@renderer/stores/engineControl'
import { notification } from 'ant-design-vue'
import { InfoCircleOutlined } from '@ant-design/icons-vue';
import { FolderOpenOutlined ,InfoCircleOutlined } from '@ant-design/icons-vue';
import { useI18n } from 'vue-i18n'

const { t } = useI18n()
const showMore = ref(false)

const engineControl = useEngineControlStore()
const { captionEngine, audioType, changeSignal } = storeToRefs(engineControl)
const { platform, captionEngine, audioType, changeSignal } = storeToRefs(engineControl)

const currentSourceLang = ref('auto')
const currentTargetLang = ref('zh')
const currentEngine = ref<'gummy'>('gummy')
const currentEngine = ref<string>('gummy')
const currentAudio = ref<0 | 1>(0)
const currentTranslation = ref<boolean>(false)

const currentAPI_KEY = ref<string>('')
const currentModelPath = ref<string>('')
const currentCustomized = ref<boolean>(false)
const currentCustomizedApp = ref('')
const currentCustomizedCommand = ref('')
@@ -114,7 +156,8 @@ function applyChange(){
  engineControl.engine = currentEngine.value
  engineControl.audio = currentAudio.value
  engineControl.translation = currentTranslation.value

  engineControl.API_KEY = currentAPI_KEY.value
  engineControl.modelPath = currentModelPath.value
  engineControl.customized = currentCustomized.value
  engineControl.customizedApp = currentCustomizedApp.value
  engineControl.customizedCommand = currentCustomizedCommand.value
@@ -133,23 +176,71 @@ function cancelChange(){
  currentEngine.value = engineControl.engine
  currentAudio.value = engineControl.audio
  currentTranslation.value = engineControl.translation

  currentAPI_KEY.value = engineControl.API_KEY
  currentModelPath.value = engineControl.modelPath
  currentCustomized.value = engineControl.customized
  currentCustomizedApp.value = engineControl.customizedApp
  currentCustomizedCommand.value = engineControl.customizedCommand
}

function selectFolderPath() {
  window.electron.ipcRenderer.invoke('control.folder.select').then((folderPath) => {
    if(!folderPath) return
    currentModelPath.value = folderPath
  })
}

watch(changeSignal, (val) => {
  if(val == true) {
    cancelChange();
    engineControl.changeSignal = false;
  }
})

watch(currentEngine, (val) => {
  if(val == 'vosk'){
    currentSourceLang.value = 'auto'
    currentTargetLang.value = ''
  }
  else if(val == 'gummy'){
    currentSourceLang.value = 'auto'
    currentTargetLang.value = useGeneralSettingStore().uiLanguage
  }
})
</script>

<style scoped>
@import url(../assets/input.css);

.label-hover-info {
  margin-top: 10px;
  max-width: min(36vw, 380px);
}

.info-label {
  color: #1677ff;
  cursor: pointer;
}

.input-folder {
  display:inline-block;
  width: 40px;
  font-size:1.38em;
  cursor: pointer;
  transition: all 0.25s;
}

.input-folder>span {
  padding: 0 4px;
  border: 2px solid #1677ff;
  color: #1677ff;
  border-radius: 30%;
}

.input-folder:hover {
  transform: scale(1.1);
}

.customize-note {
  padding: 10px 10px 0;
  color: red;

@@ -47,7 +47,7 @@
      <p class="about-desc">{{ $t('status.about.desc') }}</p>
      <a-divider />
      <div class="about-info">
        <p><b>{{ $t('status.about.version') }}</b><a-tag color="green">v0.2.0</a-tag></p>
        <p><b>{{ $t('status.about.version') }}</b><a-tag color="green">v0.4.0</a-tag></p>
        <p>
          <b>{{ $t('status.about.author') }}</b>
          <a
@@ -106,6 +106,11 @@ function openCaptionWindow() {
}

function startEngine() {
  console.log(`@@${engineControl.modelPath}##`)
  if(engineControl.engine === 'vosk' && engineControl.modelPath.trim() === '') {
    engineControl.emptyModelPathErr()
    return
  }
  window.electron.ipcRenderer.send('control.engine.start')
}

@@ -16,6 +16,13 @@ export const engines = {
        { value: 'it', label: '意大利语' },
      ]
    },
    {
      value: 'vosk',
      label: '本地 - Vosk',
      languages: [
        { value: 'auto', label: '需要自行配置模型' },
      ]
    }
  ],
  en: [
    {
@@ -34,6 +41,13 @@ export const engines = {
        { value: 'it', label: 'Italian' },
      ]
    },
    {
      value: 'vosk',
      label: 'Local - Vosk',
      languages: [
        { value: 'auto', label: 'Model needs to be configured manually' },
      ]
    }
  ],
  ja: [
    {
@@ -52,6 +66,13 @@ export const engines = {
        { value: 'it', label: 'イタリア語' },
      ]
    },
    {
      value: 'vosk',
      label: 'ローカル - Vosk',
      languages: [
        { value: 'auto', label: 'モデルを手動で設定する必要があります' },
      ]
    }
  ]
}

@@ -17,6 +17,8 @@ export default {
    "custom": "Type: Custom engine, engine path: ",
    "args": ", command arguments: ",
    "pidInfo": ", caption engine process PID: ",
    "empty": "Model Path is Empty",
    "emptyInfo": "The Vosk model path is empty. Please set the Vosk model path in the additional settings of the subtitle engine settings.",
    "stopped": "Caption Engine Stopped",
    "stoppedInfo": "The caption engine has stopped. You can click the 'Start Caption Engine' button to restart it.",
    "error": "An error occurred",
@@ -46,6 +48,11 @@ export default {
    "systemOutput": "System Audio Output (Speaker)",
    "systemInput": "System Audio Input (Microphone)",
    "enableTranslation": "Translation",
    "showMore": "More Settings",
    "apikey": "API KEY",
    "modelPath": "Model Path",
    "apikeyInfo": "API KEY required for the Gummy subtitle engine, which needs to be obtained from the Alibaba Cloud Bailing platform. For more details, see the project user manual.",
    "modelPathInfo": "The folder path of the model required by the Vosk subtitle engine. You need to download the required model to your local machine in advance. For more details, see the project user manual.",
    "customEngine": "Custom Engine",
    custom: {
      "title": "Custom Caption Engine",
@@ -64,6 +71,7 @@ export default {
    "fontFamily": "Font Family",
    "fontColor": "Font Color",
    "fontSize": "Font Size",
    "fontWeight": "Font Weight",
    "background": "Background",
    "opacity": "Opacity",
    "preview": "Preview",
@@ -71,6 +79,14 @@ export default {
    trans: {
      "title": "Translation Style Settings",
      "useSame": "Use Original Style"
    },
    "textShadow": "Text Shadow",
    shadow: {
      "title": "Text Shadow Settings",
      "offsetX": "Offset X",
      "offsetY": "Offset Y",
      "blur": "Blur",
      "color": "Color"
    }
  },
  status: {
@@ -94,11 +110,20 @@ export default {
    "projLink": "Project Link",
    "manual": "User Manual",
    "engineDoc": "Caption Engine Manual",
    "date": "July 5, 2026"
    "date": "July 11, 2026"
    }
  },
  log: {
    "title": "Caption Log",
    "copy": "Copy to Clipboard",
    "copyOptions": "Copy Options",
    "addIndex": "Add Index",
    "copyTime": "Copy Time",
    "copyContent": "Content",
    "both": "Original and Translation",
    "source": "Original Only",
    "translation": "Translation Only",
    "copySuccess": "Subtitle copied to clipboard",
    "export": "Export Caption Log",
    "clear": "Clear Caption Log"
  }

@@ -17,6 +17,8 @@ export default {
    "custom": "タイプ:カスタムエンジン、エンジンパス:",
    "args": "、コマンド引数:",
    "pidInfo": "、字幕エンジンプロセス PID:",
    "empty": "モデルパスが空です",
    "emptyInfo": "Vosk モデルのパスが空です。字幕エンジン設定の追加設定で Vosk モデルのパスを設定してください。",
    "stopped": "字幕エンジンが停止しました",
    "stoppedInfo": "字幕エンジンが停止しました。再起動するには「字幕エンジンを開始」ボタンをクリックしてください。",
    "error": "エラーが発生しました",
@@ -46,6 +48,11 @@ export default {
    "systemOutput": "システムオーディオ出力(スピーカー)",
    "systemInput": "システムオーディオ入力(マイク)",
    "enableTranslation": "翻訳",
    "showMore": "詳細設定",
    "apikey": "API KEY",
    "modelPath": "モデルパス",
    "apikeyInfo": "Gummy 字幕エンジンに必要な API KEY は、アリババクラウド百煉プラットフォームから取得する必要があります。詳細情報はプロジェクトのユーザーマニュアルをご覧ください。",
    "modelPathInfo": "Vosk 字幕エンジンに必要なモデルのフォルダパスです。必要なモデルを事前にローカルマシンにダウンロードする必要があります。詳細情報はプロジェクトのユーザーマニュアルをご覧ください。",
    "customEngine": "カスタムエンジン",
    custom: {
      "title": "カスタムキャプションエンジン",
@@ -64,6 +71,7 @@ export default {
    "fontFamily": "フォント",
    "fontColor": "カラー",
    "fontSize": "サイズ",
    "fontWeight": "文字の太さ",
    "background": "背景色",
    "opacity": "不透明度",
    "preview": "プレビュー",
@@ -71,6 +79,14 @@ export default {
    trans: {
      "title": "翻訳スタイル設定",
      "useSame": "原文のスタイルを使用"
    },
    "textShadow": "文字影",
    shadow: {
      "title": "テキストの影設定",
      "offsetX": "Offset X",
      "offsetY": "Offset Y",
      "blur": "ぼかし半径",
      "color": "影の色"
    }
  },
  status: {
@@ -94,11 +110,20 @@ export default {
    "projLink": "プロジェクトリンク",
    "manual": "ユーザーマニュアル",
    "engineDoc": "字幕エンジンマニュアル",
    "date": "2025 年 7 月 5 日"
    "date": "2025 年 7 月 11 日"
    }
  },
  log: {
    "title": "字幕ログ",
    "copy": "クリップボードにコピー",
    "copyOptions": "コピー設定",
    "addIndex": "順序番号",
    "copyTime": "時間",
    "copyContent": "内容",
    "both": "原文と翻訳",
    "source": "原文のみ",
    "translation": "翻訳のみ",
    "copySuccess": "字幕がクリップボードにコピーされました",
    "export": "エクスポート",
    "clear": "字幕ログをクリア"
  }

@@ -17,6 +17,8 @@ export default {
    "custom": "类型:自定义引擎,引擎路径:",
    "args": ",命令参数:",
    "pidInfo": ",字幕引擎进程 PID:",
    "empty": "模型路径为空",
    "emptyInfo": "Vosk 模型模型路径为空,请在字幕引擎设置的更多设置中设置 Vosk 模型的路径。",
    "stopped": "字幕引擎停止",
    "stoppedInfo": "字幕引擎已经停止,可点击“启动字幕引擎”按钮重新启动",
    "error": "发生错误",
@@ -46,6 +48,11 @@ export default {
    "systemOutput": "系统音频输出(扬声器)",
    "systemInput": "系统音频输入(麦克风)",
    "enableTranslation": "启用翻译",
    "showMore": "更多设置",
    "apikey": "API KEY",
    "modelPath": "模型路径",
    "apikeyInfo": "Gummy 字幕引擎需要的 API KEY,需要在阿里云百炼平台获取。详细信息见项目用户手册。",
    "modelPathInfo": "Vosk 字幕引擎需要的模型的文件夹路径,需要提前下载需要的模型到本地。信息详情见项目用户手册。",
    "customEngine": "自定义引擎",
    custom: {
      "title": "自定义字幕引擎",
@@ -64,6 +71,7 @@ export default {
    "fontFamily": "字体族",
    "fontColor": "字体颜色",
    "fontSize": "字体大小",
    "fontWeight": "字体粗细",
    "background": "背景颜色",
    "opacity": "不透明度",
    "preview": "显示预览",
@@ -71,6 +79,14 @@ export default {
    trans: {
      "title": "翻译样式设置",
      "useSame": "使用原文样式"
    },
    "textShadow": "文本阴影",
    shadow: {
      "title": "文本阴影设置",
      "offsetX": "X轴偏移",
      "offsetY": "Y轴偏移",
      "blur": "模糊半径",
      "color": "阴影颜色"
    }
  },
  status: {
@@ -94,12 +110,21 @@ export default {
    "projLink": "项目链接",
    "manual": "用户手册",
    "engineDoc": "字幕引擎手册",
    "date": "2025 年 7 月 5 日"
    "date": "2025 年 7 月 11 日"
    }
  },
  log: {
    "title": "字幕记录",
    "export": "导出字幕记录",
    "copy": "复制到剪贴板",
    "copyOptions": "复制选项",
    "addIndex": "添加序号",
    "copyTime": "复制时间",
    "copyContent": "复制内容",
    "both": "原文与翻译",
    "source": "仅原文",
    "translation": "仅翻译",
    "copySuccess": "字幕已复制到剪贴板",
    "clear": "清空字幕记录"
  }
}

@@ -8,6 +8,7 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
   const fontFamily = ref<string>('sans-serif')
   const fontSize = ref<number>(24)
   const fontColor = ref<string>('#000000')
+  const fontWeight = ref<number>(4)
   const background = ref<string>('#dbe2ef')
   const opacity = ref<number>(80)
   const showPreview = ref<boolean>(true)
@@ -15,6 +16,12 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
   const transFontFamily = ref<string>('sans-serif')
   const transFontSize = ref<number>(24)
   const transFontColor = ref<string>('#000000')
+  const transFontWeight = ref<number>(4)
+  const textShadow = ref<boolean>(false)
+  const offsetX = ref<number>(2)
+  const offsetY = ref<number>(2)
+  const blur = ref<number>(0)
+  const textShadowColor = ref<string>('#ffffff')

   const iBreakOptions = ref(breakOptions['zh'])
   const changeSignal = ref<boolean>(false)
@@ -35,13 +42,20 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
     fontFamily: fontFamily.value,
     fontSize: fontSize.value,
     fontColor: fontColor.value,
+    fontWeight: fontWeight.value,
     background: background.value,
     opacity: opacity.value,
     showPreview: showPreview.value,
     transDisplay: transDisplay.value,
     transFontFamily: transFontFamily.value,
     transFontSize: transFontSize.value,
-    transFontColor: transFontColor.value
+    transFontColor: transFontColor.value,
+    transFontWeight: transFontWeight.value,
+    textShadow: textShadow.value,
+    offsetX: offsetX.value,
+    offsetY: offsetY.value,
+    blur: blur.value,
+    textShadowColor: textShadowColor.value
   }
   window.electron.ipcRenderer.send('control.styles.change', styles)
 }
@@ -55,13 +69,20 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
   fontFamily.value = args.fontFamily
   fontSize.value = args.fontSize
   fontColor.value = args.fontColor
+  fontWeight.value = args.fontWeight
   background.value = args.background
   opacity.value = args.opacity
   showPreview.value = args.showPreview
   transDisplay.value = args.transDisplay
   transFontFamily.value = args.transFontFamily
   transFontSize.value = args.transFontSize
-  transFontColor.value = args.transFontColor
+  transFontColor.value = args.transFontColor,
+  transFontWeight.value = args.transFontWeight
+  textShadow.value = args.textShadow
+  offsetX.value = args.offsetX
+  offsetY.value = args.offsetY
+  blur.value = args.blur
+  textShadowColor.value = args.textShadowColor
   changeSignal.value = true
 }

@@ -74,6 +95,7 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
   fontFamily, // 字体族
   fontSize, // 字体大小
   fontColor, // 字体颜色
+  fontWeight, // 字体粗细
   background, // 背景颜色
   opacity, // 背景透明度
   showPreview, // 是否显示预览
@@ -81,6 +103,12 @@ export const useCaptionStyleStore = defineStore('captionStyle', () => {
   transFontFamily, // 翻译字体族
   transFontSize, // 翻译字体大小
   transFontColor, // 翻译字体颜色
+  transFontWeight, // 翻译字体粗细
+  textShadow, // 是否显示文本阴影
+  offsetX, // 阴影X轴偏移
+  offsetY, // 阴影Y轴偏移
+  blur, // 阴影模糊度半径
+  textShadowColor, // 阴影颜色
   backgroundRGBA, // 带透明度的背景颜色
   setStyles, // 设置样式
   sendStylesChange, // 发送样式改变
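The hunks above thread the new weight and shadow fields through the store refs, the styles payload sent over IPC, and the store's public returns. A minimal sketch of how the shadow fields combine into a CSS `text-shadow` value, mirroring the template change later in this diff (`buildTextShadow` is an illustrative helper name, not a function from the repo):

```ts
// Sketch only: composes the CSS text-shadow string the same way the
// caption template does. buildTextShadow is hypothetical.
interface ShadowStyle {
  textShadow: boolean      // master switch for the shadow
  offsetX: number          // horizontal offset in px (default 2)
  offsetY: number          // vertical offset in px (default 2)
  blur: number             // blur radius in px (default 0)
  textShadowColor: string  // CSS color, e.g. '#ffffff'
}

function buildTextShadow(s: ShadowStyle): string {
  // CSS text-shadow: <offset-x> <offset-y> <blur-radius> <color>
  if (!s.textShadow) return 'none'
  return `${s.offsetX}px ${s.offsetY}px ${s.blur}px ${s.textShadowColor}`
}

// buildTextShadow({ textShadow: true, offsetX: 2, offsetY: 2, blur: 0, textShadowColor: '#ffffff' })
// -> '2px 2px 0px #ffffff'
```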
@@ -1,4 +1,4 @@
-import { ref } from 'vue'
+import { ref, watch } from 'vue'
 import { defineStore } from 'pinia'

 import { notification } from 'ant-design-vue'
@@ -12,16 +12,18 @@ import { useGeneralSettingStore } from './generalSetting'

 export const useEngineControlStore = defineStore('engineControl', () => {
   const { t } = useI18n()
+  const platform = ref('unknown')

   const captionEngine = ref(engines[useGeneralSettingStore().uiLanguage])
   const audioType = ref(audioTypes[useGeneralSettingStore().uiLanguage])

   const engineEnabled = ref(false)
   const sourceLang = ref<string>('en')
   const targetLang = ref<string>('zh')
-  const engine = ref<'gummy'>('gummy')
+  const engine = ref<string>('gummy')
   const audio = ref<0 | 1>(0)
   const translation = ref<boolean>(true)
   const API_KEY = ref<string>('')
+  const modelPath = ref<string>('')
   const customized = ref<boolean>(false)
   const customizedApp = ref<string>('')
   const customizedCommand = ref<string>('')
@@ -36,6 +38,8 @@ export const useEngineControlStore = defineStore('engineControl', () => {
   engine: engine.value,
   audio: audio.value,
   translation: translation.value,
   API_KEY: API_KEY.value,
+  modelPath: modelPath.value,
   customized: customized.value,
   customizedApp: customizedApp.value,
   customizedCommand: customizedCommand.value
@@ -50,12 +54,21 @@ export const useEngineControlStore = defineStore('engineControl', () => {
   audio.value = controls.audio
   engineEnabled.value = controls.engineEnabled
   translation.value = controls.translation
   API_KEY.value = controls.API_KEY
+  modelPath.value = controls.modelPath
   customized.value = controls.customized
   customizedApp.value = controls.customizedApp
   customizedCommand.value = controls.customizedCommand
   changeSignal.value = true
 }

+function emptyModelPathErr() {
+  notification.open({
+    message: t('noti.empty'),
+    description: t('noti.emptyInfo')
+  });
+}

 window.electron.ipcRenderer.on('control.controls.set', (_, controls: Controls) => {
   setControls(controls)
 })
@@ -91,8 +104,15 @@ export const useEngineControlStore = defineStore('engineControl', () => {
   });
 })

+watch(platform, (newValue) => {
+  if(newValue !== 'win32' && newValue !== 'darwin') {
+    audio.value = 1
+  }
+})

 return {
-  captionEngine, // 字幕引擎
+  platform, // 系统平台
+  captionEngine, // 字幕引擎列表
   audioType, // 音频类型
   engineEnabled, // 字幕引擎是否启用
   sourceLang, // 源语言
@@ -100,11 +120,14 @@ export const useEngineControlStore = defineStore('engineControl', () => {
   engine, // 字幕引擎
   audio, // 选择音频
   translation, // 是否启用翻译
   API_KEY, // API KEY
+  modelPath, // vosk 模型路径
   customized, // 是否使用自定义字幕引擎
   customizedApp, // 自定义字幕引擎的应用程序
   customizedCommand, // 自定义字幕引擎的命令
   setControls, // 设置引擎配置
   sendControlsChange, // 发送最新控制消息到后端
+  emptyModelPathErr, // 模型路径为空时显示警告
   changeSignal, // 配置改变信号
 }
})
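The new `watch(platform, ...)` block forces `audio` to `1` whenever the platform is neither `win32` nor `darwin`, falling back to the one capture mode available elsewhere. A sketch of that fallback in isolation (`resolveAudioMode` is a hypothetical helper; the concrete meaning of modes `0` and `1` is defined elsewhere in the repo):

```ts
// Sketch of the platform guard added above: outside win32/darwin,
// only audio mode 1 is usable, so any requested mode collapses to 1.
type AudioMode = 0 | 1

function resolveAudioMode(platform: string, requested: AudioMode): AudioMode {
  const supportsBothModes = platform === 'win32' || platform === 'darwin'
  return supportsBothModes ? requested : 1
}

// resolveAudioMode('linux', 0) -> 1
// resolveAudioMode('win32', 0) -> 0
```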
@@ -14,6 +14,11 @@ export const useGeneralSettingStore = defineStore('generalSetting', () => {

   const antdTheme = ref<Object>(antDesignTheme['light'])

+  window.electron.ipcRenderer.invoke('control.nativeTheme.get').then((theme) => {
+    if(theme === 'light') setLightTheme()
+    else if(theme === 'dark') setDarkTheme()
+  })

   watch(uiLanguage, (newValue) => {
     i18n.global.locale.value = newValue
     useEngineControlStore().captionEngine = engines[newValue]
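The added `invoke('control.nativeTheme.get')` call expects a main-process handler that reports the OS theme. A plausible counterpart, assuming Electron's `nativeTheme` API — the repo's actual handler is not part of this diff:

```ts
// Hypothetical main-process handler for the renderer's invoke call.
// nativeTheme.shouldUseDarkColors reflects the current OS theme.
import { ipcMain, nativeTheme } from 'electron'

ipcMain.handle('control.nativeTheme.get', (): 'light' | 'dark' => {
  return nativeTheme.shouldUseDarkColors ? 'dark' : 'light'
})
```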
@@ -6,9 +6,11 @@ export interface Controls {
   engineEnabled: boolean,
   sourceLang: string,
   targetLang: string,
-  engine: 'gummy',
+  engine: string,
   audio: 0 | 1,
   translation: boolean,
   API_KEY: string,
+  modelPath: string,
   customized: boolean,
   customizedApp: string,
   customizedCommand: string
@@ -19,13 +21,20 @@ export interface Styles {
   fontFamily: string,
   fontSize: number,
   fontColor: string,
+  fontWeight: number,
   background: string,
   opacity: number,
   showPreview: boolean,
   transDisplay: boolean,
   transFontFamily: string,
   transFontSize: number,
-  transFontColor: string
+  transFontColor: string,
+  transFontWeight: number,
+  textShadow: boolean,
+  offsetX: number,
+  offsetY: number,
+  blur: number,
+  textShadowColor: string
 }

 export interface CaptionItem {
@@ -37,6 +46,7 @@ export interface CaptionItem {
 }

 export interface FullConfig {
+  platform: string,
   uiLanguage: UILanguage,
   uiTheme: UITheme,
   leftBarWidth: number,
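With the widened `Styles` interface, a complete style object now carries the weight and shadow fields. A sketch of a default `Styles` value built from the defaults visible in the store diff above; `transDisplay` is assumed `true`, since its default lies outside this diff:

```ts
// Sketch: a Styles value (the interface shown above) using the
// defaults from the captionStyle store where visible in this diff.
const defaultStyles: Styles = {
  fontFamily: 'sans-serif',
  fontSize: 24,
  fontColor: '#000000',
  fontWeight: 4,            // scaled by 100 in the template -> CSS 400
  background: '#dbe2ef',
  opacity: 80,
  showPreview: true,
  transDisplay: true,       // assumed; default not shown in this diff
  transFontFamily: 'sans-serif',
  transFontSize: 24,
  transFontColor: '#000000',
  transFontWeight: 4,
  textShadow: false,
  offsetX: 2,
  offsetY: 2,
  blur: 0,
  textShadowColor: '#ffffff'
}
```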
@@ -6,7 +6,7 @@
     backgroundColor: captionStyle.backgroundRGBA
   }"
 >
-  <div class="title-bar">
+  <div class="title-bar" :style="{color: captionStyle.fontColor}">
     <div class="drag-area"> </div>
     <div class="option-item" @click="pinCaptionWindow">
       <PushpinFilled v-if="pinned" />
@@ -19,11 +19,17 @@
       <CloseOutlined />
     </div>
   </div>
-  <div class="caption-container">
+  <div
+    class="caption-container"
+    :style="{
+      textShadow: captionStyle.textShadow ? `${captionStyle.offsetX}px ${captionStyle.offsetY}px ${captionStyle.blur}px ${captionStyle.textShadowColor}` : 'none'
+    }"
+  >
     <p :class="[captionStyle.lineBreak?'':'left-ellipsis']" :style="{
       fontFamily: captionStyle.fontFamily,
       fontSize: captionStyle.fontSize + 'px',
-      color: captionStyle.fontColor
+      color: captionStyle.fontColor,
+      fontWeight: captionStyle.fontWeight * 100
     }">
       <span v-if="captionData.length">{{ captionData[captionData.length-1].text }}</span>
       <span v-else>{{ $t('example.original') }}</span>
@@ -33,7 +39,8 @@
     :style="{
       fontFamily: captionStyle.transFontFamily,
       fontSize: captionStyle.transFontSize + 'px',
-      color: captionStyle.transFontColor
+      color: captionStyle.transFontColor,
+      fontWeight: captionStyle.transFontWeight * 100
     }">
       <span v-if="captionData.length">{{ captionData[captionData.length-1].translation }}</span>
       <span v-else>{{ $t('example.translation') }}</span>
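Both `<p>` blocks map the stored weight to CSS with `fontWeight * 100`: the store keeps weight as a small integer (default `4`), which the template turns into a standard CSS `font-weight` between 100 and 900. A sketch of that mapping (the clamp is ours, added for safety; the template itself multiplies without clamping):

```ts
// Sketch of the weight mapping used in the template above:
// store value 1..9 -> CSS font-weight 100..900.
function toCssFontWeight(storeWeight: number): number {
  return Math.min(Math.max(storeWeight, 1), 9) * 100
}

// toCssFontWeight(4) -> 400 (normal)
// toCssFontWeight(7) -> 700 (bold)
```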