feat: add memory-optimization doc for large file transfers; stream writes to disk to reduce memory usage; add voice chat component and related hooks with audio visualization and remote audio playback; add transfer speed and progress calculation utilities

This commit is contained in:
MatrixSeven
2026-03-05 11:43:07 +08:00
parent ea73a9444f
commit 5a492be038
13 changed files with 1533 additions and 62 deletions

MEMORY_OPTIMIZATION.md Normal file

@@ -0,0 +1,392 @@
# Memory Optimization for Large File Transfers, V3
## Background
The previous implementation risked runaway memory usage when transferring large files:
1. **Sender**: preloaded every chunk of the file into an in-memory array
2. **Receiver**: accumulated all received chunks in memory
3. **Merge**: copied all the data once more to produce the final file
**Example**: transferring a 1 GB file could peak at 3 GB+ of memory (1 GB on the sender + 1 GB on the receiver + 1 GB during the merge)
## Optimization
### V3: stream writes straight to disk
Use the **File System Access API** to write directly to disk, removing the capacity limit entirely:
- ✅ Sender: reads chunks on demand and caches only the few chunks inside the sliding window
- ✅ Receiver: **writes directly to disk**, no IndexedDB involved
- ✅ No file-size limit: arbitrarily large files in theory (bounded by disk space)
- ✅ Fallback: unsupported browsers automatically fall back to a Blob download
### Memory comparison
| Scenario | Old implementation | V2 (IndexedDB) | V3 (disk stream) |
|------|--------|----------------|-------------|
| 1 GB file | ~3 GB peak memory | ~50 MB | ~20 MB |
| 10 GB file | OOM crash ❌ | ~50 MB | ~20 MB |
| 100 GB file | impossible ❌ | IndexedDB full ⚠️ | ~20 MB ✅ |
| Capacity limit | RAM size | tens of GB | disk size |
## Architecture changes
### 1. Streaming file writer (`stream-writer.ts`)
#### The AutoFileWriter class
**Prefers the File System Access API** (Chrome/Edge 86+):
```typescript
const writer = new AutoFileWriter(fileName);
await writer.init(suggestedName); // user picks a save location
// Sequential writes
await writer.writeChunk(arrayBuffer);
// Or positioned writes (supports out-of-order receipt)
await writer.writeAt(position, arrayBuffer);
// Finish writing
await writer.close(); // the file is now on disk
```
**Automatic fallback** (unsupported browsers):
```typescript
// Internally falls back to Blob + download
writer.getMode(); // 'stream' or 'fallback'
```
#### Browser support
| Browser | Support | Mode |
|--------|----------|------|
| Chrome 86+ | ✅ | stream (direct disk write) |
| Edge 86+ | ✅ | stream |
| Firefox | ⚠️ planned | fallback (memory + download) |
| Safari | ❌ | fallback |
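The stream/fallback split above is driven by feature detection. A minimal sketch of the check that `supportsStreamWrite` presumably performs (hypothetical helper names; the real implementation lives in `stream-writer.ts`):

```typescript
// Hypothetical sketch: the streaming path needs window.showSaveFilePicker,
// which Chrome/Edge 86+ expose and Firefox/Safari currently do not.
export function supportsStreamWriteSketch(): boolean {
  return typeof window !== 'undefined' && 'showSaveFilePicker' in window;
}

// Pick the writer mode the way AutoFileWriter.getMode() would report it.
export function pickWriterMode(): 'stream' | 'fallback' {
  return supportsStreamWriteSketch() ? 'stream' : 'fallback';
}
```

Outside a browser (or in Firefox/Safari) the check is false, so the Blob fallback is selected.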
### 2. Sender optimizations (`ConnectionTransferProtocol.ts`)
#### Sliding window + streaming reads
```typescript
// Old implementation: preload every chunk
const allChunks: ArrayBuffer[] = [];
for (let i = 0; i < total; i++) {
  allChunks.push(await readChunk(i)); // ❌ accumulates in memory
}
// Optimized: read on demand + cache only the window
const chunkCache = new Map<number, ArrayBuffer>();
const readChunk = async (index: number) => {
  if (chunkCache.has(index)) return chunkCache.get(index)!;
  const data = await file.slice(start, end).arrayBuffer();
  // Cache only chunks inside the window
  if (index >= ackedCount && index < sentCount + windowSize) {
    chunkCache.set(index, data);
  }
  return data;
};
```
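The window predicate in the snippet above can be exercised on its own. A runnable sketch (the eviction helper is an assumption about how acknowledged chunks leave `chunkCache`; it is not code from the repo):

```typescript
// A chunk index is cacheable while it sits between the last ACKed chunk
// and the leading edge of the send window.
export function inWindow(
  index: number,
  ackedCount: number,
  sentCount: number,
  windowSize: number,
): boolean {
  return index >= ackedCount && index < sentCount + windowSize;
}

// Hypothetical eviction step: drop chunks that have been acknowledged,
// keeping the cache bounded by the window size.
export function evictAcked(cache: Map<number, ArrayBuffer>, ackedCount: number): void {
  for (const index of Array.from(cache.keys())) {
    if (index < ackedCount) cache.delete(index);
  }
}
```

With `windowSize: 4`, `ackedCount: 10` and `sentCount: 12`, indices 10 through 15 are cacheable and everything below 10 is evicted, so only a handful of 64 KB chunks live in memory at any moment.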
#### Window size configuration
```typescript
// WebRTC mode (unstable networks, smaller window)
windowSize: 4 // 4 chunks in flight at once
// WebSocket mode (LAN, the window can be larger)
windowSize: 8 // 8 chunks in flight at once
```
### 3. Receiver optimizations (V3)
#### Direct disk writes
```typescript
// Old implementation: accumulate in memory
file.chunks[index] = data; // ❌ every chunk stays in memory
// V2 (IndexedDB)
await storage.saveChunk(fileId, index, data); // ⚠️ capacity-limited
// V3: write straight to disk
const position = index * chunkSize;
await writer.writeAt(position, data); // ✅ near-zero memory footprint
```
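Because every chunk except the last has the same fixed size, the `position` passed to `writeAt` is pure arithmetic, which is what makes out-of-order receipt safe. A sketch of that math (helper names are illustrative, not from the repo):

```typescript
// Byte offset at which chunk `index` starts in the output file.
export function chunkOffset(index: number, chunkSize: number): number {
  return index * chunkSize;
}

// Actual length of chunk `index` for a file of `totalBytes`;
// only the final chunk may be shorter than `chunkSize`.
export function chunkLength(index: number, totalBytes: number, chunkSize: number): number {
  return Math.min(chunkSize, totalBytes - index * chunkSize);
}
```

For a 100 000-byte file with 64 KB chunks, chunk 0 is the full 65 536 bytes and chunk 1 is the remaining 34 464.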
#### Completion handling
```typescript
// Streaming mode: the file is already saved, just close
await writer.close();
console.log('File saved to the location the user chose');
// Fallback mode: trigger a browser download
await writer.close(); // automatically opens the download dialog
```
## Configuration
### TransferConfig
```typescript
{
  chunkSize: 64 * 1024, // chunk size (64 KB)
  windowSize: 4,        // sliding window size
  enableAck: true,      // enable ACK confirmation
  ackTimeout: 2000,     // ACK timeout (ms)
  maxRetries: 3,        // maximum retries
}
```
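Two quantities fall straight out of this config: how many chunks a file splits into, and an upper bound on in-flight payload (window × chunk size), which is the core of the ~20 MB peak quoted below once system buffers are added. A sketch (plain arithmetic, not code from the repo):

```typescript
// Number of chunks a file of `totalBytes` splits into.
export function totalChunks(totalBytes: number, chunkSize: number): number {
  return Math.ceil(totalBytes / chunkSize);
}

// Upper bound on bytes the sliding window holds in memory at once.
export function windowBytes(windowSize: number, chunkSize: number): number {
  return windowSize * chunkSize;
}
```

A 1 GB file at 64 KB per chunk is 16 384 chunks, and a window of 4 holds at most 256 KB of payload.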
### Memory monitoring
```typescript
import { TransferMemoryMonitor } from '@/lib/memory-monitor';
const monitor = new TransferMemoryMonitor();
monitor.setWarningThreshold(0.8); // warn at 80%
monitor.onWarning((stats) => {
  console.warn('Memory usage is high:', stats.percentage);
  // e.g. pause the transfer or notify the user
});
monitor.startMonitoring(1000); // check every second
```
## Performance characteristics
### Memory usage
| File size | Peak memory | Disk usage |
|---------|---------|----------|
| 100 MB | ~10 MB | 100 MB |
| 1 GB | ~20 MB | 1 GB |
| 10 GB | ~20 MB | 10 GB |
| 100 GB | ~20 MB | 100 GB |
> **Note**: peak memory comes mostly from the sliding window (4-8 chunks at 64 KB each) plus system buffers
### Transfer speed
- **LAN WebSocket**: 100 MB/s+
- **WebRTC DataChannel**: 10-50 MB/s
- **Disk writes**: hardware-dependent (SSD 500 MB/s+, HDD 100 MB/s+)
### Capacity limits
| Approach | Capacity limit |
|------|----------|
| Old implementation (memory) | RAM size (~8 GB) |
| V2 (IndexedDB) | tens of GB (browser quota) |
| **V3 (disk stream)** | **disk size (no practical limit)** |
> **V3 advantage**: users can transfer 100 GB+ files as long as the disk has room
## Usage examples
### Sending a large file
```typescript
const protocol = new ConnectionTransferProtocol(connection, {
  chunkSize: 64 * 1024,
  windowSize: 4,
  enableAck: true,
});
// Sending a 100 GB file holds steady at ~20 MB of memory
const result = await protocol.sendFile(largeFile, 'file-123');
```
### Receiving a large file
```typescript
// V3: the receiver automatically prompts the user for a save location
protocol.onFileStart((meta) => {
  console.log('About to receive:', meta.name, meta.size);
  // The user sees the file-save dialog here
});
protocol.onFileComplete(({ id, file }) => {
  console.log('File transfer complete!');
  // Streaming mode: the file is already at the chosen location
  // Fallback mode: the browser has triggered a download
});
protocol.onFileProgress(({ fileName, progress }) => {
  console.log(`${fileName}: ${progress.toFixed(1)}%`);
});
```
### User experience
**Chrome/Edge (streaming mode)**:
1. A "save file" dialog opens as receiving starts
2. The user picks a save location
3. The file is written to disk as it arrives
4. When done, the file is already at the chosen location
**Firefox/Safari (fallback mode)**:
1. Receives silently (in memory)
2. Triggers a browser download once complete
3. The file appears in the download bar
### Monitoring memory
```typescript
import { TransferMemoryMonitor } from '@/lib/memory-monitor';
const monitor = new TransferMemoryMonitor();
monitor.onWarning((stats) => {
  toast.warning(`Memory usage: ${stats.percentage * 100}%`);
});
monitor.startMonitoring();
// Cleanup
onUnmount(() => {
  monitor.stopMonitoring();
});
```
## Best practices
### 1. Detect browser capability up front
```typescript
import { supportsStreamWrite } from '@/lib/stream-writer';
if (supportsStreamWrite()) {
  console.log('✅ Streaming writes supported; files of any size can be transferred');
} else {
  console.warn('⚠️ Running in fallback mode; large files will consume memory');
  // Optionally cap the file size
  if (fileSize > 1024 * 1024 * 1024) {
    alert('Your browser cannot handle large transfers; please use Chrome or Edge');
  }
}
```
### 2. Error handling (user cancels the save)
```typescript
protocol.onFileError(({ fileId, error }) => {
  if (error.includes('用户取消')) {
    console.log('User cancelled the file save');
  } else {
    console.error('Transfer failed:', error);
  }
});
```
### 3. Error handling (cleanup on failure)
```typescript
protocol.onFileError(({ fileId, error }) => {
  console.error('Transfer failed:', error);
  // Remove the chunks of the failed file
  getGlobalChunkStorage()
    .deleteFile(fileId)
    .catch(err => console.error('Cleanup failed:', err));
});
```
### 4. Progress display
```typescript
protocol.onFileProgress(({ fileName, progress, transferredBytes, totalBytes }) => {
  console.log(`${fileName}: ${progress.toFixed(1)}% (${transferredBytes}/${totalBytes})`);
  // Update the UI
  setProgress(progress);
});
```
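The commit message also mentions transfer speed and progress calculation utilities. A minimal sketch of how `progress` and a smoothed bytes-per-second figure could be derived from the byte counters above (the exponential moving average is an assumption, not necessarily what the repo ships):

```typescript
// Percent complete, clamped to [0, 100].
export function computeProgress(transferredBytes: number, totalBytes: number): number {
  if (totalBytes <= 0) return 0;
  return Math.min(100, (transferredBytes / totalBytes) * 100);
}

// Bytes/second smoothed with an exponential moving average so the
// displayed speed does not jump on every chunk.
export function smoothedSpeed(
  prevSpeed: number,
  deltaBytes: number,
  deltaMs: number,
  alpha = 0.3,
): number {
  if (deltaMs <= 0) return prevSpeed;
  const instant = (deltaBytes / deltaMs) * 1000;
  return prevSpeed === 0 ? instant : alpha * instant + (1 - alpha) * prevSpeed;
}
```

Calling `smoothedSpeed` once per progress event with the byte/time deltas since the previous event yields a stable figure suitable for a UI label.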
## Compatibility
### Browser support
| Feature | Chrome | Firefox | Safari | Edge |
|------|--------|---------|--------|------|
| IndexedDB | ✅ 24+ | ✅ 16+ | ✅ 10+ | ✅ 12+ |
| Async Iterator | ✅ 63+ | ✅ 57+ | ✅ 11.1+ | ✅ 79+ |
| File.slice | ✅ 21+ | ✅ 13+ | ✅ 10+ | ✅ 12+ |
### Fallback options
If the browser does not even support IndexedDB (rare), you can:
1. Cap the file size (e.g. 100 MB)
2. Show a warning asking the user to upgrade their browser
3. Upload to a server in segments instead
## Troubleshooting
### Issue 1: the user cancelled the save dialog
**Symptom**: receiving fails with an "initialization failed" error
**Cause**: the user clicked cancel in the File System Access API dialog
**Fix**:
```typescript
protocol.onFileError(({ error }) => {
  if (error.includes('用户')) {
    toast.info('File receive cancelled');
  }
});
```
### Issue 2: memory is still high in fallback mode
**Diagnosis**:
1. Check whether the file is cached anywhere else
2. Use the Chrome DevTools Memory Profiler
3. Confirm that `chunkCache.clear()` is being called
**Fix**:
```typescript
// Force garbage collection (development only)
if (typeof gc !== 'undefined') {
  gc();
}
```
### Issue 3: transfers are slow
**Diagnosis**:
1. IndexedDB writes can be slow (especially the first)
2. The window is too small (not enough parallelism)
**Fix**:
```typescript
// Enlarge the window (LAN environments)
windowSize: 16
// Or disable ACKs (reliable networks)
enableAck: false
```
## Future work
1. **Web Workers**: move IndexedDB operations into a Worker to avoid blocking the main thread
2. **Compressed transfers**: compress chunks with the CompressionStream API
3. **Incremental checksums**: replace CRC32 with xxHash for speed
4. **Resumable transfers**: build transfer resumption on top of IndexedDB
## Summary
With this optimization, file transfers are no longer memory-bound; in theory a file of any size can be transferred (bounded only by available disk space).
**Key improvements**:
- ✅ Memory usage drops from O(n) to O(1)
- ✅ Very large files (10 GB+) are supported
- ✅ Transfer speed is unaffected
- ✅ Backwards compatible; no UI-layer changes required


@@ -20,7 +20,7 @@ export default function DesktopViewer({
const videoRef = useRef<HTMLVideoElement>(null);
const containerRef = useRef<HTMLDivElement>(null);
const [isFullscreen, setIsFullscreen] = useState(false);
const [isMuted, setIsMuted] = useState(false);
const [isMuted, setIsMuted] = useState(true);
const [showControls, setShowControls] = useState(true);
const [isPlaying, setIsPlaying] = useState(false);
const [needsUserInteraction, setNeedsUserInteraction] = useState(false);


@@ -0,0 +1,151 @@
"use client";
import React, { useCallback, useRef, useEffect } from 'react';
import { Mic, MicOff, PhoneCall, PhoneOff } from 'lucide-react';
import { Button } from '@/components/ui/button';
import { VoiceIndicator } from '@/components/VoiceIndicator';
import { useVoiceChatBusiness } from '@/hooks/desktop-share';
import type { WebRTCConnection } from '@/hooks/connection/useSharedWebRTCManager';
interface VoiceChatPanelProps {
connection: WebRTCConnection;
isPeerConnected: boolean;
className?: string;
}
export default function VoiceChatPanel({
connection,
isPeerConnected,
className = '',
}: VoiceChatPanelProps) {
const voiceChat = useVoiceChatBusiness(connection);
const remoteAudioRef = useRef<HTMLAudioElement>(null);
// Set the remote audio element reference
useEffect(() => {
if (remoteAudioRef.current) {
voiceChat.setRemoteAudioRef(remoteAudioRef.current);
}
}, [voiceChat.setRemoteAudioRef]);
// Enable/disable voice
const handleToggleVoice = useCallback(async () => {
try {
if (voiceChat.isVoiceEnabled) {
await voiceChat.disableVoice();
} else {
await voiceChat.enableVoice();
}
} catch (error) {
console.error('[VoiceChatPanel] Failed to toggle voice:', error);
}
}, [voiceChat]);
// Toggle mute
const handleToggleMute = useCallback(() => {
voiceChat.toggleMute();
}, [voiceChat]);
return (
<div className={`bg-white/90 backdrop-blur-sm rounded-xl p-3 sm:p-4 border border-slate-200 ${className}`}>
{/* Hidden audio element for remote audio playback */}
<audio ref={remoteAudioRef} autoPlay playsInline />
<div className="flex items-center justify-between">
{/* Left: voice status and visualization */}
<div className="flex items-center space-x-3">
<div className="flex items-center space-x-2">
<PhoneCall className="w-4 h-4 text-slate-500" />
<span className="text-sm font-medium text-slate-700"></span>
</div>
{/* Local voice visualization */}
{voiceChat.isVoiceEnabled && (
<div className="flex items-center space-x-2">
<span className="text-xs text-slate-500"></span>
<VoiceIndicator
volume={voiceChat.localVolume}
isSpeaking={voiceChat.localIsSpeaking}
isMuted={voiceChat.isMuted}
/>
</div>
)}
{/* Remote voice visualization */}
{voiceChat.isRemoteVoiceActive && (
<div className="flex items-center space-x-2">
<div className="w-px h-4 bg-slate-300" />
<span className="text-xs text-slate-500"></span>
<VoiceIndicator
volume={voiceChat.remoteVolume}
isSpeaking={voiceChat.remoteIsSpeaking}
/>
</div>
)}
</div>
{/* Right: control buttons */}
<div className="flex items-center space-x-2">
{voiceChat.isVoiceEnabled && (
<Button
onClick={handleToggleMute}
variant="ghost"
size="sm"
className={`rounded-lg ${
voiceChat.isMuted
? 'text-red-500 hover:bg-red-50 hover:text-red-600'
: 'text-slate-600 hover:bg-slate-100'
}`}
title={voiceChat.isMuted ? 'Unmute' : 'Mute'}
>
{voiceChat.isMuted ? (
<MicOff className="w-4 h-4" />
) : (
<Mic className="w-4 h-4" />
)}
</Button>
)}
<Button
onClick={handleToggleVoice}
disabled={!isPeerConnected}
variant={voiceChat.isVoiceEnabled ? 'destructive' : 'default'}
size="sm"
className={`rounded-lg ${
!voiceChat.isVoiceEnabled
? 'bg-gradient-to-r from-emerald-500 to-teal-500 hover:from-emerald-600 hover:to-teal-600 text-white'
: ''
}`}
title={voiceChat.isVoiceEnabled ? 'Stop speaking' : 'Start speaking'}
>
{voiceChat.isVoiceEnabled ? (
<>
<PhoneOff className="w-4 h-4 mr-1" />
<span className="text-xs"></span>
</>
) : (
<>
<Mic className="w-4 h-4 mr-1" />
<span className="text-xs"></span>
</>
)}
</Button>
</div>
</div>
{/* Error message */}
{voiceChat.error && (
<div className="mt-2 text-xs text-red-500 bg-red-50 rounded-lg px-3 py-1.5">
{voiceChat.error}
</div>
)}
{/* Not-connected hint */}
{!isPeerConnected && (
<div className="mt-2 text-xs text-slate-400 text-center">
Voice chat becomes available once the P2P connection is established
</div>
)}
</div>
);
}


@@ -0,0 +1,92 @@
import React from 'react';
import { Mic, MicOff } from 'lucide-react';
interface VoiceIndicatorProps {
volume: number; // 0-100
isSpeaking: boolean;
isMuted?: boolean;
className?: string;
}
export function VoiceIndicator({
volume,
isSpeaking,
isMuted = false,
className = '',
}: VoiceIndicatorProps) {
// Scale the ripple with volume
const rippleScale = 1 + (volume / 100) * 0.8; // 1.0 to 1.8
// Number of volume bars (5)
const barCount = 5;
const activeBars = Math.ceil((volume / 100) * barCount);
return (
<div className={`flex items-center space-x-2 ${className}`}>
{/* Microphone icon with ripple effect */}
<div className="relative flex items-center justify-center">
{/* Ripple animation - only while speaking */}
{isSpeaking && !isMuted && (
<>
<div
className="absolute w-10 h-10 rounded-full bg-green-500 opacity-20 animate-ping"
style={{
animationDuration: '1s',
transform: `scale(${rippleScale})`,
}}
/>
<div
className="absolute w-10 h-10 rounded-full bg-green-400 opacity-30"
style={{
transform: `scale(${rippleScale})`,
transition: 'transform 0.1s ease-out',
}}
/>
</>
)}
{/* Microphone icon */}
<div
className={`relative z-10 w-8 h-8 rounded-full flex items-center justify-center transition-colors ${
isMuted
? 'bg-red-100 text-red-600'
: isSpeaking
? 'bg-green-100 text-green-600'
: 'bg-slate-100 text-slate-600'
}`}
>
{isMuted ? (
<MicOff className="w-4 h-4" />
) : (
<Mic className="w-4 h-4" />
)}
</div>
</div>
{/* Volume bars - 5 levels */}
<div className="flex items-center space-x-0.5">
{Array.from({ length: barCount }).map((_, index) => {
const isActive = index < activeBars && !isMuted;
const height = 8 + index * 1.5; // increasing heights: 8, 9.5, 11, 12.5, 14
return (
<div
key={index}
className={`w-1 rounded-full transition-all duration-150 ${
isActive
? isSpeaking
? 'bg-green-500'
: 'bg-slate-400'
: 'bg-slate-200'
}`}
style={{
height: `${height}px`,
opacity: isActive ? 1 : 0.3,
}}
/>
);
})}
</div>
</div>
);
}


@@ -8,6 +8,8 @@ import { useToast } from '@/components/ui/toast-simple';
import { useDesktopShareBusiness } from '@/hooks/desktop-share';
import DesktopViewer from '@/components/DesktopViewer';
import { ConnectionStatus } from '@/components/ConnectionStatus';
import VoiceChatPanel from '@/components/VoiceChatPanel';
import { ConfirmDialog } from '@/components/ui/confirm-dialog';
import { validateRoomCode, checkRoomStatus, handleNetworkError } from '@/lib/room-utils';
interface WebRTCDesktopReceiverProps {
@@ -20,6 +22,7 @@ export default function WebRTCDesktopReceiver({ className, initialCode, onConnec
const [inputCode, setInputCode] = useState(initialCode || '');
const [isLoading, setIsLoading] = useState(false);
const [isJoiningRoom, setIsJoiningRoom] = useState(false); // joining-room state
const [showPeerLeftDialog, setShowPeerLeftDialog] = useState(false); // sender-left dialog
const hasTriedAutoJoin = React.useRef(false); // ref tracking whether auto-join was already attempted
const { showToast } = useToast();
@@ -93,6 +96,26 @@ export default function WebRTCDesktopReceiver({ className, initialCode, onConnec
}
}, [desktopShare, showToast]);
// Watch for the sender leaving: while viewing, webRTCError becomes '对方已退出共享'
useEffect(() => {
if (desktopShare.isViewing && desktopShare.webRTCError === '对方已退出共享') {
console.log('[DesktopShareReceiver] Detected that the sender stopped sharing');
setShowPeerLeftDialog(true);
}
}, [desktopShare.isViewing, desktopShare.webRTCError]);
// After confirming the sender left, return to the initial screen
const handlePeerLeftConfirm = useCallback(async () => {
setShowPeerLeftDialog(false);
try {
await desktopShare.stopViewing();
} catch {
// ignore
}
setInputCode('');
hasTriedAutoJoin.current = false;
}, [desktopShare]);
// If an initial code exists and we are not yet viewing, try to auto-join
React.useEffect(() => {
console.log('[WebRTCDesktopReceiver] useEffect fired, params:', {
@@ -261,6 +284,14 @@ export default function WebRTCDesktopReceiver({ className, initialCode, onConnec
</div>
</div>
{/* Voice chat panel */}
{desktopShare.webRTCConnection && (
<VoiceChatPanel
connection={desktopShare.webRTCConnection}
isPeerConnected={desktopShare.isPeerConnected}
/>
)}
{/* Desktop display area */}
{desktopShare.remoteStream ? (
<DesktopViewer
@@ -287,6 +318,18 @@ export default function WebRTCDesktopReceiver({ className, initialCode, onConnec
)}
</div>
</div>
{/* Sender-left dialog */}
<ConfirmDialog
isOpen={showPeerLeftDialog}
onClose={handlePeerLeftConfirm}
onConfirm={handlePeerLeftConfirm}
title="Share ended"
message="The other side has stopped sharing their desktop. Click OK to go back."
confirmText="OK"
cancelText="OK"
type="info"
/>
</div>
);
}


@@ -1,12 +1,13 @@
"use client";
import React, { useState, useCallback, useEffect } from 'react';
import React, { useState, useCallback, useEffect, useRef } from 'react';
import { Button } from '@/components/ui/button';
import { Share, Monitor, Play, Square, Repeat } from 'lucide-react';
import { Share, Monitor, Play, Square, Repeat } from 'lucide-react';
import { useToast } from '@/components/ui/toast-simple';
import { useDesktopShareBusiness } from '@/hooks/desktop-share';
import RoomInfoDisplay from '@/components/RoomInfoDisplay';
import { ConnectionStatus } from '@/components/ConnectionStatus';
import VoiceChatPanel from '@/components/VoiceChatPanel';
interface WebRTCDesktopSenderProps {
className?: string;
@@ -16,6 +17,7 @@ interface WebRTCDesktopSenderProps {
export default function WebRTCDesktopSender({ className, onConnectionChange }: WebRTCDesktopSenderProps) {
const [isLoading, setIsLoading] = useState(false);
const { showToast } = useToast();
const hasAutoStartedRef = useRef(false);
// Desktop share business logic
const desktopShare = useDesktopShareBusiness();
@@ -103,6 +105,21 @@ export default function WebRTCDesktopSender({ className, onConnectionChange }: W
}
}, [desktopShare, showToast]);
// Automatically open the desktop picker once the P2P connection is up
useEffect(() => {
if (
desktopShare.isPeerConnected &&
desktopShare.canStartSharing &&
!desktopShare.isSharing &&
!isLoading &&
!hasAutoStartedRef.current
) {
hasAutoStartedRef.current = true;
console.log('[DesktopShareSender] P2P connection established; opening the desktop picker automatically');
handleStartSharing();
}
}, [desktopShare.isPeerConnected, desktopShare.canStartSharing, desktopShare.isSharing, isLoading, handleStartSharing]);
// Switch desktop
const handleSwitchDesktop = useCallback(async () => {
try {
@@ -220,22 +237,10 @@ export default function WebRTCDesktopSender({ className, onConnectionChange }: W
{/* Desktop share controls */}
{desktopShare.canStartSharing && (
<div className="bg-white/80 backdrop-blur-sm rounded-xl p-4 border border-slate-200 mb-6">
<div className="flex items-center justify-between mb-3">
<h4 className="text-lg font-medium text-slate-800 flex items-center">
<Monitor className="w-5 h-5 mr-2" />
</h4>
{desktopShare.isSharing && (
<div className="flex items-center space-x-1 bg-emerald-100 text-emerald-700 px-2 py-1 rounded-md">
<div className="w-2 h-2 bg-emerald-500 rounded-full animate-pulse"></div>
<span className="font-medium"></span>
</div>
)}
</div>
<div className="space-y-4">
{!desktopShare.isSharing ? (
<div className="space-y-4">
{!desktopShare.isSharing ? (
// Not sharing: show the start button
<div className="bg-white/80 backdrop-blur-sm rounded-xl p-4 border border-slate-200">
<div className="space-y-3">
<Button
onClick={handleStartSharing}
@@ -262,35 +267,52 @@ export default function WebRTCDesktopSender({ className, onConnectionChange }: W
</div>
)}
</div>
) : (
<div className="space-y-4">
<div className="flex items-center justify-center space-x-2 text-green-600 mb-4">
<Play className="w-5 h-5" />
<span className="font-semibold"></span>
</div>
<div className="flex justify-center space-x-3">
<Button
onClick={handleSwitchDesktop}
disabled={isLoading}
variant="outline"
size="sm"
>
<Repeat className="w-4 h-4 mr-2" />
{isLoading ? 'Switching...' : 'Switch desktop'}
</Button>
<Button
onClick={handleStopSharing}
disabled={isLoading}
variant="destructive"
size="sm"
>
<Square className="w-4 h-4 mr-2" />
{isLoading ? 'Stopping...' : 'Stop sharing'}
</Button>
</div>
) : (
// Sharing: show the desktop preview with an overlay control bar
<div className="relative bg-black rounded-xl overflow-hidden">
{/* Local desktop preview */}
<SenderDesktopPreview stream={desktopShare.localStream} />
{/* Overlay control bar */}
<div className="absolute bottom-0 left-0 right-0 bg-gradient-to-t from-black/80 via-black/40 to-transparent p-3 sm:p-4">
<div className="flex items-center justify-between">
<div className="flex items-center space-x-2 text-white">
<div className="w-2 h-2 bg-emerald-500 rounded-full animate-pulse"></div>
<span className="text-sm font-medium"></span>
</div>
<div className="flex items-center space-x-2">
<Button
onClick={handleSwitchDesktop}
disabled={isLoading}
size="sm"
className="bg-white/20 text-white hover:bg-white/30 border border-white/30 rounded-lg px-3 py-1.5"
>
<Repeat className="w-4 h-4 mr-1.5" />
<span className="text-sm">{isLoading ? 'Switching...' : 'Switch desktop'}</span>
</Button>
<Button
onClick={handleStopSharing}
disabled={isLoading}
size="sm"
className="bg-red-500/80 text-white hover:bg-red-600 border border-red-400/50 rounded-lg px-3 py-1.5"
>
<Square className="w-4 h-4 mr-1.5" />
<span className="text-sm">{isLoading ? 'Stopping...' : 'Stop sharing'}</span>
</Button>
</div>
</div>
</div>
)}
</div>
</div>
)}
{/* Voice chat panel */}
{desktopShare.webRTCConnection && (
<VoiceChatPanel
connection={desktopShare.webRTCConnection}
isPeerConnected={desktopShare.isPeerConnected}
/>
)}
</div>
)}
@@ -323,3 +345,38 @@ export default function WebRTCDesktopSender({ className, onConnectionChange }: W
</div>
);
}
// Sender desktop preview subcomponent
function SenderDesktopPreview({ stream }: { stream: MediaStream | null }) {
const videoRef = useRef<HTMLVideoElement>(null);
useEffect(() => {
if (videoRef.current && stream) {
videoRef.current.srcObject = stream;
videoRef.current.play().catch(() => {
// Silently ignore blocked autoplay
});
} else if (videoRef.current) {
videoRef.current.srcObject = null;
}
}, [stream]);
if (!stream) {
return (
<div className="flex items-center justify-center h-52 text-white/60">
<Monitor className="w-10 h-10" />
</div>
);
}
return (
<video
ref={videoRef}
autoPlay
playsInline
muted
className="w-full object-contain"
style={{ aspectRatio: '16/9', minHeight: '240px', maxHeight: '480px' }}
/>
);
}


@@ -103,7 +103,8 @@ export function useWebRTCConnectionCore(
}
currentRoom.current = null;
isUserDisconnecting.current = false; // reset the user-initiated disconnect flag
// Note: isUserDisconnecting is NOT reset here,
// because ws.onclose fires asynchronously and needs the flag until it has finished
}, [dataChannelManager]);
// ===== Connect to the relay server (the actual WS connection logic) =====
@@ -429,18 +430,24 @@ export function useWebRTCConnectionCore(
}
break;
case 'failed':
console.error('[ConnectionCore] ❌ WebRTC connection failed; starting relay fallback');
stateManager.updateState({ isPeerConnected: false });
// P2P failed; fall back to the relay automatically
initiateRelayFallback();
console.error('[ConnectionCore] ❌ WebRTC connection failed');
if (!isUserDisconnecting.current) {
console.log('[ConnectionCore] Starting relay fallback');
stateManager.updateState({ isPeerConnected: false });
initiateRelayFallback();
}
break;
case 'disconnected':
console.log('[ConnectionCore] 🔌 WebRTC connection disconnected');
stateManager.updateState({ isPeerConnected: false });
if (!isUserDisconnecting.current) {
stateManager.updateState({ isPeerConnected: false });
}
break;
case 'closed':
console.log('[ConnectionCore] 🚫 WebRTC connection closed');
stateManager.updateState({ isPeerConnected: false });
if (!isUserDisconnecting.current) {
stateManager.updateState({ isPeerConnected: false });
}
break;
}
};
@@ -703,18 +710,24 @@ export function useWebRTCConnectionCore(
case 'disconnection':
console.log('[ConnectionCore] 🔌 Peer disconnected intentionally');
// Mark as an intentional disconnect so later WS close / PC state changes do not surface spurious errors
isUserDisconnecting.current = true;
// Handle the peer disconnecting
stateManager.updateState({
isPeerConnected: false,
isConnected: false, // 添加这个状态
error: '对方已离开房间',
canRetry: true
isConnected: false,
error: '对方已退出共享',
canRetry: false
});
// Clean up the P2P connection but keep the WebSocket open to allow reconnects
// Clean up the P2P connection
if (pcRef.current) {
pcRef.current.close();
pcRef.current = null;
}
// Reset the flag after a delay
setTimeout(() => {
isUserDisconnecting.current = false;
}, 2000);
break;
default:
@@ -735,15 +748,15 @@ export function useWebRTCConnectionCore(
console.log('[ConnectionCore] 🔌 WebSocket closed, code:', event.code, 'reason:', event.reason);
stateManager.updateState({ isWebSocketConnected: false });
// Check for a user-initiated disconnect
// Check for an intentional disconnect (triggered by this side or the peer)
if (isUserDisconnecting.current) {
console.log('[ConnectionCore] ✅ User-initiated disconnect; closing normally');
// No error message for user-initiated disconnects
console.log('[ConnectionCore] ✅ Intentional disconnect; closing normally');
isUserDisconnecting.current = false;
return;
}
// Only surface an error for abnormal closes that were not user-initiated
if (event.code !== 1000 && event.code !== 1001) { // abnormal close
// Only surface an error for abnormal closes that were not intentional
if (event.code !== 1000 && event.code !== 1001) {
stateManager.updateState({ error: `WebSocket closed abnormally (${event.code}): ${event.reason || 'connection dropped unexpectedly'}`, isConnecting: false, canRetry: true });
}
};
@@ -770,6 +783,12 @@ export function useWebRTCConnectionCore(
// On an intentional disconnect, fully reset to the initial state (no errors or messages)
stateManager.resetToInitial();
// Reset the flag after a delay so the async onclose callback can still read it
setTimeout(() => {
isUserDisconnecting.current = false;
}, 1000);
console.log('[ConnectionCore] ✅ Connection torn down and cleaned up');
}, [cleanup, stateManager]);


@@ -1,2 +1,4 @@
// Desktop-share related hooks
export { useDesktopShareBusiness } from './useDesktopShareBusiness';
export { useVoiceChatBusiness } from './useVoiceChatBusiness';
export { useAudioVisualizer } from './useAudioVisualizer';


@@ -0,0 +1,122 @@
import { useEffect, useRef, useState } from 'react';
interface AudioVisualizerState {
volume: number; // 0-100
isSpeaking: boolean;
}
export function useAudioVisualizer(stream: MediaStream | null) {
const [state, setState] = useState<AudioVisualizerState>({
volume: 0,
isSpeaking: false,
});
const audioContextRef = useRef<AudioContext | null>(null);
const analyserRef = useRef<AnalyserNode | null>(null);
const animationFrameRef = useRef<number | null>(null);
useEffect(() => {
if (!stream) {
// Reset state
setState({ volume: 0, isSpeaking: false });
return;
}
const audioTracks = stream.getAudioTracks();
if (audioTracks.length === 0) {
return;
}
try {
// Create the audio context
const audioContext = new AudioContext();
audioContextRef.current = audioContext;
// Create the analyser node
const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
analyser.smoothingTimeConstant = 0.8;
analyserRef.current = analyser;
// Connect the audio stream to the analyser
const source = audioContext.createMediaStreamSource(stream);
source.connect(analyser);
// Create the data array
const dataArray = new Uint8Array(analyser.frequencyBinCount);
// Volume detection thresholds
const SPEAKING_THRESHOLD = 10; // speaking threshold
const SILENCE_FRAMES = 10; // consecutive frames below threshold before speech is considered stopped
let silenceFrameCount = 0;
// Analyse the audio data
const analyzeAudio = () => {
if (!analyserRef.current) return;
analyser.getByteFrequencyData(dataArray);
// Compute the average volume
let sum = 0;
for (let i = 0; i < dataArray.length; i++) {
sum += dataArray[i];
}
const average = sum / dataArray.length;
// Normalize to 0-100
const normalizedVolume = Math.min(100, Math.round((average / 255) * 100));
// Decide whether the user is speaking
const currentlySpeaking = normalizedVolume > SPEAKING_THRESHOLD;
if (currentlySpeaking) {
silenceFrameCount = 0;
setState(prev => ({
volume: normalizedVolume,
isSpeaking: true,
}));
} else {
silenceFrameCount++;
if (silenceFrameCount >= SILENCE_FRAMES) {
setState(prev => ({
volume: normalizedVolume,
isSpeaking: false,
}));
} else {
// Keep the speaking state but update the volume
setState(prev => ({
volume: normalizedVolume,
isSpeaking: prev.isSpeaking,
}));
}
}
animationFrameRef.current = requestAnimationFrame(analyzeAudio);
};
// Start analysing
analyzeAudio();
// Cleanup
return () => {
if (animationFrameRef.current !== null) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
if (audioContextRef.current) {
audioContextRef.current.close();
audioContextRef.current = null;
}
analyserRef.current = null;
setState({ volume: 0, isSpeaking: false });
};
} catch (error) {
console.error('[AudioVisualizer] Failed to initialize the audio analyser:', error);
}
}, [stream]);
return state;
}


@@ -6,6 +6,7 @@ interface DesktopShareState {
isViewing: boolean;
connectionCode: string;
remoteStream: MediaStream | null;
localStream: MediaStream | null;
error: string | null;
isWaitingForPeer: boolean; // new: waiting for the peer to connect
}
@@ -17,6 +18,7 @@ export function useDesktopShareBusiness() {
isViewing: false,
connectionCode: '',
remoteStream: null,
localStream: null,
error: null,
isWaitingForPeer: false,
});
@@ -299,6 +301,7 @@ export function useDesktopShareBusiness() {
updateState({
isSharing: true,
isWaitingForPeer: false,
localStream: stream,
});
console.log('[DesktopShare] 🎉 Desktop share started');
@@ -346,6 +349,8 @@ export function useDesktopShareBusiness() {
await setupVideoSending(newStream);
console.log('[DesktopShare] ✅ Desktop switch complete');
updateState({ localStream: newStream });
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Failed to switch desktop';
console.error('[DesktopShare] ❌ Failed to switch desktop:', error);
@@ -380,6 +385,7 @@ export function useDesktopShareBusiness() {
updateState({
isSharing: false,
connectionCode: '',
localStream: null,
error: null,
isWaitingForPeer: false,
});
@@ -415,6 +421,7 @@ export function useDesktopShareBusiness() {
// Keep the WebSocket connection and room code, but reset the sharing state
updateState({
isSharing: false,
localStream: null,
error: null,
isWaitingForPeer: false,
});
@@ -515,6 +522,7 @@ export function useDesktopShareBusiness() {
isViewing: state.isViewing,
connectionCode: state.connectionCode,
remoteStream: state.remoteStream,
localStream: state.localStream,
error: state.error,
isWaitingForPeer: state.isWaitingForPeer,
isConnected: webRTC.isConnected,


@@ -0,0 +1,312 @@
import { useCallback, useEffect, useRef, useState } from 'react';
import { WebRTCConnection } from '../connection/useSharedWebRTCManager';
import { useAudioVisualizer } from './useAudioVisualizer';
interface VoiceChatState {
isVoiceEnabled: boolean;
isMuted: boolean;
isRemoteVoiceActive: boolean;
localAudioStream: MediaStream | null;
remoteAudioStream: MediaStream | null;
error: string | null;
}
export function useVoiceChatBusiness(connection: WebRTCConnection) {
const [state, setState] = useState<VoiceChatState>({
isVoiceEnabled: false,
isMuted: false,
isRemoteVoiceActive: false,
localAudioStream: null,
remoteAudioStream: null,
error: null,
});
const localAudioStreamRef = useRef<MediaStream | null>(null);
const audioSenderRef = useRef<RTCRtpSender | null>(null);
const remoteAudioRef = useRef<HTMLAudioElement | null>(null);
// Audio visualization
const localAudioVisualizer = useAudioVisualizer(state.localAudioStream);
const remoteAudioVisualizer = useAudioVisualizer(state.remoteAudioStream);
const updateState = useCallback((updates: Partial<VoiceChatState>) => {
setState(prev => ({ ...prev, ...updates }));
}, []);
// Listen for the remote audio track
const handleRemoteAudioTrack = useCallback((event: RTCTrackEvent, currentTrackRef: { current: MediaStreamTrack | null }) => {
if (event.track.kind !== 'audio') return;
// Remove listeners from the old track
if (currentTrackRef.current) {
currentTrackRef.current.onended = null;
currentTrackRef.current.onmute = null;
currentTrackRef.current.onunmute = null;
}
currentTrackRef.current = event.track;
if (event.streams.length > 0) {
const remoteStream = event.streams[0];
event.track.enabled = true;
// Update state
setState(prev => ({
...prev,
remoteAudioStream: remoteStream,
isRemoteVoiceActive: true
}));
// Listen for the track ending
event.track.onended = () => {
setState(prev => ({ ...prev, isRemoteVoiceActive: false }));
};
// Listen for track mute/unmute
event.track.onmute = () => {
// remote audio track muted
};
event.track.onunmute = () => {
// remote audio track unmuted
};
// After setting state, use setTimeout so the audio element has updated
setTimeout(() => {
if (remoteAudioRef.current && remoteStream.active) {
remoteAudioRef.current.srcObject = remoteStream;
remoteAudioRef.current.play().catch(err => {
// Ignore AbortError; it is a benign race condition
if (err.name !== 'AbortError') {
console.error('[VoiceChat] Failed to play remote audio:', err);
}
});
}
}, 0);
}
}, []); // empty deps: the function reference never changes
useEffect(() => {
if (!connection) return;
const currentTrackRef = { current: null as MediaStreamTrack | null };
const trackHandler = (event: RTCTrackEvent) => {
if (event.track.kind === 'audio') {
handleRemoteAudioTrack(event, currentTrackRef);
}
};
// feature/ws onTrack returns void; it sets pc.ontrack directly
connection.onTrack(trackHandler);
return () => {
if (currentTrackRef.current) {
currentTrackRef.current.onended = null;
currentTrackRef.current.onmute = null;
currentTrackRef.current.onunmute = null;
}
};
}, [connection, handleRemoteAudioTrack]);
// Acquire the local audio stream
const getLocalAudioStream = useCallback(async (): Promise<MediaStream> => {
try {
const stream = await navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true,
},
video: false,
});
return stream;
} catch (error) {
console.error('[VoiceChat] Failed to get the local audio stream:', error);
// Give a more specific message per error type
if (error instanceof DOMException) {
if (error.name === 'NotAllowedError') {
throw new Error('Microphone permission denied; please allow microphone access in your browser settings');
} else if (error.name === 'NotFoundError') {
throw new Error('No microphone detected; please connect one and retry');
} else if (error.name === 'NotReadableError') {
throw new Error('The microphone is in use by another application; please close it first');
} else if (error.name === 'OverconstrainedError') {
throw new Error('The microphone does not support the requested audio settings');
} else if (error.name === 'AbortError') {
throw new Error('Microphone access was interrupted');
} else if (error.name === 'SecurityError') {
throw new Error('Security restrictions block microphone access; please use HTTPS');
}
}
throw new Error('Could not get microphone permission; please make sure it is allowed');
}
}, []);
// Enable voice chat
const enableVoice = useCallback(async () => {
if (state.isVoiceEnabled || !connection) {
return;
}
try {
updateState({ error: null });
// 检查P2P连接状态
if (!connection.isPeerConnected) {
throw new Error('P2P连接尚未建立无法启用语音');
}
// Get the local audio stream
const stream = await getLocalAudioStream();
localAudioStreamRef.current = stream;
console.log('[VoiceChat] ✅ 本地音频流获取成功:', {
streamId: stream.id,
audioTracks: stream.getAudioTracks().length,
trackEnabled: stream.getAudioTracks()[0]?.enabled,
trackReadyState: stream.getAudioTracks()[0]?.readyState
});
// Add the audio track to the P2P connection
const audioTrack = stream.getAudioTracks()[0];
if (audioTrack) {
const role = connection.currentRoom?.role;
console.log('[VoiceChat] 📤 添加音频轨道到P2P连接, 当前角色:', role);
const sender = connection.addTrack(audioTrack, stream);
audioSenderRef.current = sender;
if (sender) {
console.log('[VoiceChat] 📊 Sender 信息:', {
track: sender.track?.id,
trackEnabled: sender.track?.enabled,
trackReadyState: sender.track?.readyState
});
}
// Important: after adding an audio track, the local peer must initiate renegotiation with an offer
console.log('[VoiceChat] 📡 [' + role + '] 创建 offer 进行重新协商(添加音频轨道)');
const negotiated = await connection.createOfferNow();
console.log('[VoiceChat] 📡 [' + role + '] 重新协商结果:', negotiated);
}
updateState({
isVoiceEnabled: true,
localAudioStream: stream,
isMuted: false,
});
} catch (error) {
console.error('[VoiceChat] 启用语音失败:', error);
const errorMsg = error instanceof Error ? error.message : '启用语音失败';
updateState({ error: errorMsg });
throw error;
}
}, [connection, getLocalAudioStream, state.isVoiceEnabled, updateState]);
// Disable voice chat
const disableVoice = useCallback(async () => {
if (!state.isVoiceEnabled || !connection) return;
const role = connection.currentRoom?.role;
// Remove the audio track
if (audioSenderRef.current) {
connection.removeTrack(audioSenderRef.current);
audioSenderRef.current = null;
// Important: after removing the audio track, the local peer must renegotiate with a new offer
console.log('[VoiceChat] 📡 [' + role + '] 移除音频轨道后重新协商');
try {
await connection.createOfferNow();
} catch (error) {
console.error('[VoiceChat] 重新协商失败:', error);
}
}
// Stop the local audio stream
if (localAudioStreamRef.current) {
localAudioStreamRef.current.getTracks().forEach(track => {
track.stop();
});
localAudioStreamRef.current = null;
}
updateState({
isVoiceEnabled: false,
localAudioStream: null,
isMuted: false,
});
}, [connection, state.isVoiceEnabled, updateState]);
// Toggle mute: flipping track.enabled sends silence without any renegotiation
const toggleMute = useCallback(() => {
if (!localAudioStreamRef.current) {
return;
}
const audioTracks = localAudioStreamRef.current.getAudioTracks();
if (audioTracks.length === 0) {
return;
}
const newMutedState = !state.isMuted;
audioTracks.forEach(track => {
track.enabled = !newMutedState;
});
updateState({ isMuted: newMutedState });
}, [state.isMuted, updateState]);
// Set the remote audio element ref
const setRemoteAudioRef = useCallback((element: HTMLAudioElement | null) => {
remoteAudioRef.current = element;
if (element && state.remoteAudioStream && state.remoteAudioStream.active) {
element.srcObject = state.remoteAudioStream;
element.play().catch(err => {
// Ignore AbortError; it is a normal race condition
if (err.name !== 'AbortError') {
console.error('[VoiceChat] 播放远程音频失败:', err);
}
});
}
}, [state.remoteAudioStream]);
// Cleanup: stop local tracks on unmount
useEffect(() => {
return () => {
if (localAudioStreamRef.current) {
localAudioStreamRef.current.getTracks().forEach(track => track.stop());
}
};
}, []);
return {
// State
isVoiceEnabled: state.isVoiceEnabled,
isMuted: state.isMuted,
isRemoteVoiceActive: state.isRemoteVoiceActive,
error: state.error,
// Audio visualization data
localVolume: localAudioVisualizer.volume,
localIsSpeaking: localAudioVisualizer.isSpeaking,
remoteVolume: remoteAudioVisualizer.volume,
remoteIsSpeaking: remoteAudioVisualizer.isSpeaking,
// Methods
enableVoice,
disableVoice,
toggleMute,
setRemoteAudioRef,
// Debug info
_debug: {
hasRemoteStream: !!state.remoteAudioStream,
remoteStreamId: state.remoteAudioStream?.id,
remoteTrackCount: state.remoteAudioStream?.getTracks().length || 0,
}
};
}

/**
* Transfer speed and time calculation utilities
*/
export interface TransferSpeed {
bytesPerSecond: number;
displaySpeed: string;
unit: string;
}
export interface TransferProgress {
totalBytes: number;
transferredBytes: number;
percentage: number;
speed: TransferSpeed;
remainingTime: {
seconds: number;
display: string;
};
elapsedTime: {
seconds: number;
display: string;
};
}
/**
* Format a transfer speed for display
* @param bytesPerSecond Bytes transferred per second
* @returns The formatted speed
*/
export function formatTransferSpeed(bytesPerSecond: number): TransferSpeed {
if (bytesPerSecond < 1024) {
return {
bytesPerSecond,
displaySpeed: `${bytesPerSecond.toFixed(0)}`,
unit: 'B/s'
};
} else if (bytesPerSecond < 1024 * 1024) {
const kbps = bytesPerSecond / 1024;
return {
bytesPerSecond,
displaySpeed: `${kbps.toFixed(1)}`,
unit: 'KB/s'
};
} else {
const mbps = bytesPerSecond / (1024 * 1024);
return {
bytesPerSecond,
displaySpeed: `${mbps.toFixed(1)}`,
unit: 'MB/s'
};
}
}
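// Illustrative example (not part of the original source):
//   formatTransferSpeed(1536)
//   → { bytesPerSecond: 1536, displaySpeed: '1.5', unit: 'KB/s' }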
/**
* Format a duration for display
* @param seconds Duration in seconds
* @returns The formatted time string
*/
export function formatTime(seconds: number): string {
if (!isFinite(seconds) || seconds < 0) {
return '--:--';
}
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
const secs = Math.floor(seconds % 60);
if (hours > 0) {
return `${hours}:${minutes.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}`;
} else {
return `${minutes}:${secs.toString().padStart(2, '0')}`;
}
}
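// Illustrative examples (not part of the original source):
//   formatTime(45)       → '0:45'
//   formatTime(3725)     → '1:02:05'
//   formatTime(Infinity) → '--:--'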
/**
* Transfer progress tracker
*/
export class TransferProgressTracker {
private startTime: number;
private lastUpdateTime: number;
private lastSpeedUpdateTime: number;
private lastProgressUpdateTime: number;
private lastTransferredBytes: number;
private speedHistory: number[] = [];
private readonly maxHistorySize = 10; // keep the 10 most recent speed samples
private readonly speedUpdateInterval = 300; // speed display update interval (300 ms)
private readonly progressUpdateInterval = 50; // progress display update interval (50 ms)
private cachedProgress: TransferProgress | null = null;
private lastDisplayedSpeed: TransferSpeed;
private lastDisplayedPercentage: number = 0;
constructor(
private totalBytes: number,
private initialTransferredBytes: number = 0
) {
this.startTime = Date.now();
this.lastUpdateTime = this.startTime;
this.lastSpeedUpdateTime = this.startTime;
this.lastProgressUpdateTime = this.startTime;
this.lastTransferredBytes = initialTransferredBytes;
this.lastDisplayedSpeed = formatTransferSpeed(0);
}
/**
* Update the transfer progress
* @param transferredBytes Bytes transferred so far
* @returns The current transfer progress
*/
update(transferredBytes: number): TransferProgress {
const now = Date.now();
const elapsedTimeMs = now - this.startTime;
const timeSinceLastUpdate = now - this.lastUpdateTime;
const timeSinceLastSpeedUpdate = now - this.lastSpeedUpdateTime;
const timeSinceLastProgressUpdate = now - this.lastProgressUpdateTime;
// Compute the instantaneous speed based on the most recent update
let instantSpeed = 0;
if (timeSinceLastUpdate > 0) {
const bytesDiff = transferredBytes - this.lastTransferredBytes;
instantSpeed = (bytesDiff * 1000) / timeSinceLastUpdate; // bytes per second
}
// Only refresh the displayed speed once the speed update interval has elapsed
let shouldUpdateSpeed = timeSinceLastSpeedUpdate >= this.speedUpdateInterval;
// Only refresh the displayed progress once the progress update interval has elapsed
let shouldUpdateProgress = timeSinceLastProgressUpdate >= this.progressUpdateInterval;
// On the first update, or when the transfer completes, refresh both immediately
if (this.cachedProgress === null || transferredBytes >= this.totalBytes) {
shouldUpdateSpeed = true;
shouldUpdateProgress = true;
}
if (shouldUpdateSpeed) {
// Update the speed history
if (instantSpeed > 0) {
this.speedHistory.push(instantSpeed);
if (this.speedHistory.length > this.maxHistorySize) {
this.speedHistory.shift();
}
}
// Compute the average speed
let averageSpeed = 0;
if (this.speedHistory.length > 0) {
averageSpeed = this.speedHistory.reduce((sum, speed) => sum + speed, 0) / this.speedHistory.length;
} else if (elapsedTimeMs > 0) {
// Fall back to the overall average when no instantaneous samples exist
averageSpeed = (transferredBytes * 1000) / elapsedTimeMs;
}
// Update the displayed speed
this.lastDisplayedSpeed = formatTransferSpeed(averageSpeed);
this.lastSpeedUpdateTime = now;
}
// Update the displayed progress percentage
if (shouldUpdateProgress) {
const currentPercentage = this.totalBytes > 0 ? (transferredBytes / this.totalBytes) * 100 : 0;
this.lastDisplayedPercentage = Math.min(currentPercentage, 100);
this.lastProgressUpdateTime = now;
}
// Estimate remaining time using the currently displayed speed
const remainingBytes = this.totalBytes - transferredBytes;
const remainingTimeSeconds = this.lastDisplayedSpeed.bytesPerSecond > 0
? remainingBytes / this.lastDisplayedSpeed.bytesPerSecond
: Infinity;
// Update tracking state
this.lastUpdateTime = now;
this.lastTransferredBytes = transferredBytes;
// Build the progress object (using the stabilized display values)
const progress: TransferProgress = {
totalBytes: this.totalBytes,
transferredBytes,
percentage: this.lastDisplayedPercentage,
speed: this.lastDisplayedSpeed,
remainingTime: {
seconds: remainingTimeSeconds,
display: formatTime(remainingTimeSeconds)
},
elapsedTime: {
seconds: elapsedTimeMs / 1000,
display: formatTime(elapsedTimeMs / 1000)
}
};
// Cache the progress
this.cachedProgress = progress;
return progress;
}
/**
* Reset the tracker
*/
reset(totalBytes?: number, initialTransferredBytes: number = 0) {
if (totalBytes !== undefined) {
this.totalBytes = totalBytes;
}
this.startTime = Date.now();
this.lastUpdateTime = this.startTime;
this.lastSpeedUpdateTime = this.startTime;
this.lastProgressUpdateTime = this.startTime;
this.lastTransferredBytes = initialTransferredBytes;
this.speedHistory = [];
this.cachedProgress = null;
this.lastDisplayedSpeed = formatTransferSpeed(0);
this.lastDisplayedPercentage = 0;
}
/**
* Get the total byte count
*/
getTotalBytes(): number {
return this.totalBytes;
}
/**
* Get the average speed over the entire transfer
*/
getOverallAverageSpeed(): number {
const elapsedTimeMs = Date.now() - this.startTime;
if (elapsedTimeMs > 0) {
return (this.lastTransferredBytes * 1000) / elapsedTimeMs;
}
return 0;
}
}
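// Typical usage (illustrative; onChunkProgress and render are hypothetical callbacks):
//   const tracker = createTransferTracker(file.size);
//   onChunkProgress(bytes => {
//     const p = tracker.update(bytes);
//     render(`${p.speed.displaySpeed} ${p.speed.unit}, ${p.remainingTime.display} left`);
//   });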
/**
* Create a transfer progress tracker
* @param totalBytes Total bytes to transfer
* @param initialTransferredBytes Bytes already transferred at start
* @returns A TransferProgressTracker instance
*/
export function createTransferTracker(totalBytes: number, initialTransferredBytes: number = 0): TransferProgressTracker {
return new TransferProgressTracker(totalBytes, initialTransferredBytes);
}
/**
* Simple, stateless transfer speed calculation
* @param transferredBytes Bytes transferred
* @param elapsedTimeMs Elapsed time in milliseconds
* @returns The formatted speed
*/
export function calculateSpeed(transferredBytes: number, elapsedTimeMs: number): TransferSpeed {
if (elapsedTimeMs <= 0) {
return formatTransferSpeed(0);
}
const bytesPerSecond = (transferredBytes * 1000) / elapsedTimeMs;
return formatTransferSpeed(bytesPerSecond);
}
/**
* Estimate the remaining time
* @param remainingBytes Bytes remaining
* @param bytesPerSecond Current speed in bytes per second
* @returns The formatted remaining time
*/
export function calculateRemainingTime(remainingBytes: number, bytesPerSecond: number): string {
if (bytesPerSecond <= 0 || remainingBytes <= 0) {
return '--:--';
}
const remainingSeconds = remainingBytes / bytesPerSecond;
return formatTime(remainingSeconds);
}
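Taken together, the stateless helpers compose into a one-line speed/ETA readout for a progress bar. A minimal self-contained sketch (the formatting logic is restated from `formatTime` and `calculateRemainingTime` above so the snippet runs standalone; `transferLabel` is a hypothetical helper, not part of the module):

```typescript
// Standalone restatement of the stateless helpers above, composed into
// a progress-bar label. transferLabel and its values are illustrative.

function formatTime(seconds: number): string {
  if (!isFinite(seconds) || seconds < 0) return '--:--';
  const hours = Math.floor(seconds / 3600);
  const minutes = Math.floor((seconds % 3600) / 60);
  const secs = Math.floor(seconds % 60);
  return hours > 0
    ? `${hours}:${String(minutes).padStart(2, '0')}:${String(secs).padStart(2, '0')}`
    : `${minutes}:${String(secs).padStart(2, '0')}`;
}

function calculateRemainingTime(remainingBytes: number, bytesPerSecond: number): string {
  if (bytesPerSecond <= 0 || remainingBytes <= 0) return '--:--';
  return formatTime(remainingBytes / bytesPerSecond);
}

// Hypothetical helper producing a "2.0 MB/s, 4:10 left" style label.
function transferLabel(totalBytes: number, transferredBytes: number, elapsedMs: number): string {
  const bytesPerSecond = elapsedMs > 0 ? (transferredBytes * 1000) / elapsedMs : 0;
  const speedMBps = (bytesPerSecond / (1024 * 1024)).toFixed(1);
  const eta = calculateRemainingTime(totalBytes - transferredBytes, bytesPerSecond);
  return `${speedMBps} MB/s, ${eta} left`;
}

// 100 MB of 600 MB transferred in 50 s → 2.0 MB/s, 250 s remaining
console.log(transferLabel(600 * 1024 * 1024, 100 * 1024 * 1024, 50_000));
```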