
Media (2)
-
SPIP - plugins - embed code - Exemple
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (43)
-
Submit enhancements and plugins
13 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to ask for help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.
-
Les autorisations surchargées par les plugins
27 April 2010
Mediaspip core: autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Librairies et binaires spécifiques au traitement vidéo et sonore
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFmpeg: the main encoder, which can transcode almost any video or audio file into formats playable on the web. See this tutorial for its installation; Oggz-tools: tools for inspecting Ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...)
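As a purely illustrative example of the kind of transcoding described above (file names and codec choices are placeholders, not taken from the article), converting a source file into a web-playable Ogg Theora/Vorbis file with FFmpeg might look like:
ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis source.ogv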
On other sites (5689)
-
Cannot concatenate videos ffmpeg [on hold]
1 May 2014, by Paul Prescod
I have a bitmap that I would like to concatenate to the front of many videos as a sort of title screen or disclaimer screen.
I try to turn it into a video with the same attributes as the rest of the videos. So first I introspect an existing video:
ffmpeg version 2.2.1 Copyright (c) 2000-2014 the FFmpeg developers
built on Apr 11 2014 22:50:38 with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.2.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid
libavutil 52. 66.100 / 52. 66.100
libavcodec 55. 52.102 / 55. 52.102
libavformat 55. 33.100 / 55. 33.100
libavdevice 55. 10.100 / 55. 10.100
libavfilter 4. 2.100 / 4. 2.100
libavresample 1. 2. 0 / 1. 2. 0
libswscale 2. 5.102 / 2. 5.102
libswresample 0. 18.100 / 0. 18.100
libpostproc 52. 3.100 / 52. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf52.78.3
Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : SoundHandler
Then I try to create a similar file:
/usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main disclaimer.mp4
It seems to work okay. The video plays as I would expect it to. The attributes turn out very similar. Here is a diff:
< Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'disclaimer.mp4':
---
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
18,20c18,21
< encoder : Lavf55.33.100
< Duration: 00:00:05.01, start: 0.000000, bitrate: 21 kb/s
< Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc (default)
---
> creation_time : 1970-01-01 00:00:00
> encoder : Lavf52.78.3
> Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
> Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
21a23
22a25,28
> Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
> Metadata:
> handler_name : SoundHandler
But when I try to concatenate, I get errors.
$ cat temporary.txt
file disclaimer.mp4
file EO1.mp4
$ /usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -f concat -i temporary.txt -c copy output.mp4
[concat @ 0x7fd880806600] Estimating duration from bitrate, this may be inaccurate
Input #0, concat, from 'temporary.txt':
Duration: 00:00:00.02, start: 0.000000, bitrate: 17 kb/s
Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf55.33.100
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 584x328 [SAR 1:1 DAR 73:41], q=2-31, 17 kb/s, 23.98 fps, 19184 tbn, 19184 tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93600, current: 5951; changing to 93601. This may result in incorrect timestamps in the output file.
[concat @ 0x7fd880806600] Invalid stream index 1
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93601, current: 6001; changing to 93602. This may result in incorrect timestamps in the output file.
[concat @ 0x7fd880806600] Invalid stream index 1
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93602, current: 6051; changing to 93603. This may result in incorrect timestamps in the output file....
frame= 546 fps=0.0 q=-1.0 Lsize= 1127kB time=00:00:04.90 bitrate=1882.9kbits/s
video:1123kB audio:0kB subtitle:0 data:0 global headers:0kB muxing overhead 0.349865%
The output looks like it only has my disclaimer file in it, not the rest of the video.
I’m also confused about why it feels like it needs to "estimate" anything. It knows the input FPS and input durations. I’m not sure if this is the problem or not. Maybe it’s just a bug.
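A likely explanation, offered here as an interpretation rather than something stated in the post: EO1.mp4 carries both a video and an audio stream while disclaimer.mp4 has only video, which is what the repeated "Invalid stream index 1" messages point at; the concat demuxer expects every listed file to have the same set of streams. One hedged way to test this is to regenerate the disclaimer with a matching silent audio track (libfaac is chosen only because the build shown above was configured with --enable-libfaac):
/usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main -c:a libfaac -b:a 64k disclaimer.mp4
The anullsrc input only supplies silence in the same 48000 Hz stereo layout as EO1.mp4; the concat command can then be rerun unchanged.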
-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal
I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appears inverted or rotated in the final output.

Issue Details:

- The first video appears normal.
- The videos can be recorded using both front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging using ffmpeg_kit_flutter.

Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging using ffmpeg_kit_flutter?

Any help would be appreciated! 🚀


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

 // Simpler FFmpeg command for testing
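 // Note (interpretation, not from the original question): with "-c copy" the
 // concat demuxer copies packets as-is, so each clip keeps its own rotation
 // metadata, and clips recorded with the front and back cameras can end up
 // flipped relative to each other in the merged file. A re-encoding sketch
 // follows after this class.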
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 _startVideoRecording();
 _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}
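One hedged way to address the rotation question, sketched below under two assumptions that are not part of the original code: the ffmpeg_kit_flutter build in use includes libx264 (the full-gpl variant), and a fixed 1280x720 / 30 fps target is acceptable. The helper name _normalizeClip is illustrative. The idea is to re-encode each clip before building list.txt, so that FFmpeg applies each file's rotation metadata instead of copying packets (and their differing display matrices) with -c copy.

Future<String> _normalizeClip(String inputPath, int index) async {
  final Directory appDir = await getApplicationDocumentsDirectory();
  final String outputPath = '${appDir.path}/normalized_${index}.mp4';

  // Re-encoding bakes each clip's rotation into the pixels and gives every
  // clip identical codec, resolution, frame-rate and audio parameters.
  final command = '-i "$inputPath" '
      '-vf "scale=1280:720:force_original_aspect_ratio=decrease,'
      'pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" '
      '-r 30 -c:v libx264 -preset veryfast -pix_fmt yuv420p '
      '-c:a aac -b:a 128k -ar 44100 -y "$outputPath"';

  final session = await FFmpegKit.execute(command);
  if (!ReturnCode.isSuccess(await session.getReturnCode())) {
    throw Exception('Failed to normalize $inputPath');
  }
  return outputPath;
}

_mergeVideos() could then write the normalized paths into list.txt and keep its existing -f concat -safe 0 ... -c copy command unchanged.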


-
Java uses FFmpegFrameRecorder to encode frames into H264 streams
5 September 2024, by zhang1973
I want to obtain the Frame from the video stream, process it, use FFmpegFrameRecorder to encode it into an H264 stream, and transmit it to the front end. But I found that the AVPacket obtained directly using grabber.grabAVPacket can be converted into an H264 stream and played normally, while the H264 stream encoded using FFmpegFrameRecorder cannot be played.


Here is my code:


private FFmpegFrameRecorder recorder;
 private ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
 private boolean createRecoder(Frame frame){
 recorder = new FFmpegFrameRecorder(outputStream, frame.imageWidth, frame.imageHeight);
 recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
 recorder.setFormat("h264"); //"h264"); //
 recorder.setFrameRate(30);
 recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
 recorder.setVideoBitrate(4000 * 1000); // Set the bitrate to 4000 kbps
 recorder.setVideoOption("preset", "ultrafast"); // Set the encoder preset: "ultrafast" is the fastest, "veryslow" is the slowest but gives the best quality
 recorder.setAudioChannels(0);

 try {
 recorder.start();
 return recorderStatus = true;
 } catch (org.bytedeco.javacv.FrameRecorder.Exception e1) {
 log.info("启动转码录制器失败", e1);
 MediaService.cameras.remove(cameraDto.getMediaKey());
 e1.printStackTrace();
 }

 return recorderStatus = false;
 }

 private boolean slow = false;
 protected void transferStream2H264() throws FFmpegFrameGrabber.Exception {

 // Method to initialize the grabber and pull images
 log.info(" create grabber ");
 if (!createGrabber()) {
 log.error(" == > ");
 return;
 }
 transferFlag = true;

 if(!createRecoder(grabber.grab())){
 return;
 }

 try {
 grabber.flush();
 } catch (Exception e) {
 log.info("清空拉流器缓存失败", e);
 e.printStackTrace();
 }

 if (header == null) {
 header = bos.toByteArray();
 slow = true;
// System.out.println("Header1");
// System.out.println(header);
 bos.reset();
 }else{
 System.out.println("Header2");
 System.out.println(header);
 }

 running = true;

 // Update the connection count in real time
 listenClient();

 long startTime = 0;
 long videoTS = 0;

 for (; running && grabberStatus;) {
 try {
 if (transferFlag) {
 long startGrab = System.currentTimeMillis();
 // Video grabber
// AVPacket pkt = grabber.grabPacket();
 Frame frame = grabber.grab();
 recorder.record(frame);
 byte[] videoData = outputStream.toByteArray();
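 // Note (interpretation, not from the original post): outputStream is never
 // reset, so videoData contains every byte written since recorder.start(),
 // not just the latest encoded frame; see the sketch after this listing.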
 if ((System.currentTimeMillis() - startGrab) > 5000) {
 log.info("\r\n{}\r\n视频流网络异常>>>", cameraDto.getUrl());
 closeMedia();
 break;
 }

 videoTS = 1000 * (System.currentTimeMillis() - startTime);


 if (startTime == 0) {
 startTime = System.currentTimeMillis();
 }
 videoTS = 1000 * (System.currentTimeMillis() - startTime);

 byte[] rbuffer = videoData;
 readSize = videoData.length;

 if(spsdata == null || ppsdata == null){
 movePos = 0;
 lastPos = 0;
 isNewPack = true;
 while(movePos < readSize){
 if (rbuffer[movePos] == 0 && rbuffer[movePos + 1] == 0 && rbuffer[movePos + 2] == 1) {
 findCode = true;
 skipLen = 3;
 mCurFrameFirstByte = (int)(0xff & rbuffer[movePos + skipLen]);
 } else if (rbuffer[movePos] == 0 && rbuffer[movePos + 1] == 0 && rbuffer[movePos + 2] == 0 && rbuffer[movePos + 3] == 1) {
 findCode = true;
 skipLen = 4;
 mCurFrameFirstByte = (int)(0xff & rbuffer[movePos + skipLen]);
 } else {
 skipLen = 1;
 }

 if(!isFirstFind && isNewPack && findCode){
 mFrameFirstByte = mCurFrameFirstByte;
 findCode = false;
 isNewPack = false;
 mNaluType = mFrameFirstByte & 0x1f;
 if(mNaluType != MediaConstant.NALU_TYPE_SEI &&
 mNaluType != MediaConstant.NALU_TYPE_SPS &&
 mNaluType != MediaConstant.NALU_TYPE_PPS &&
 mNaluType != MediaConstant.NALU_TYPE_IDR){
 startCounter++;
 break;
 }
 }

 if(isFirstFind){
 isFirstFind = false;
 findCode = false;
 mFrameFirstByte = mCurFrameFirstByte;
 }

 if(findCode){
 startCounter++;
 mNaluType = mFrameFirstByte & 0x1f;

 findCode = false;
 mFrameLen = (movePos - lastPos);
 if(mNaluType == MediaConstant.NALU_TYPE_IDR){
 mFrameLen = readSize - movePos;
 }

 if(mNaluType != MediaConstant.NALU_TYPE_SEI &&
 mNaluType != MediaConstant.NALU_TYPE_SPS &&
 mNaluType != MediaConstant.NALU_TYPE_PPS &&
 mNaluType != MediaConstant.NALU_TYPE_IDR){
 System.out.println(" one packe many frames ---> type: " + mNaluType + " jump out ");
 break;
 }
 if(mNaluType == MediaConstant.NALU_TYPE_SPS){
 if(null == spsdata){
 spsdata = new byte[mFrameLen];
 System.arraycopy(rbuffer, lastPos, spsdata, 0, mFrameLen);
 }
 }
 if(mNaluType == MediaConstant.NALU_TYPE_PPS){

 if(null == ppsdata){
 ppsdata = new byte[mFrameLen];
 System.arraycopy(rbuffer, lastPos, ppsdata, 0, mFrameLen);
 }
 }

 lastPos = movePos;
 mFrameFirstByte = mCurFrameFirstByte;
 mNaluType = mFrameFirstByte & 0x1f;
 if(mNaluType == MediaConstant.NALU_TYPE_IDR){
 mFrameLen = readSize - movePos;
 startCounter++;

 break;
 }
 }

 movePos += skipLen;
 isNewPack = false;
 }
 }

 sendFrameData(rbuffer);
// }
// av_packet_unref(pkt);
// }

// }
 } else {
 }
 } catch (FFmpegFrameRecorder.Exception e) {
 // catch the more specific recorder exception before the generic Exception
 throw new RuntimeException(e);
 } catch (Exception e) {
 grabberStatus = false;
 MediaService.cameras.remove(cameraDto.getMediaKey());
 }
 }

 try {
 grabber.close();
 bos.close();
 } catch (org.bytedeco.javacv.FrameRecorder.Exception e) {
 e.printStackTrace();
 } catch (IOException e) {
 e.printStackTrace();
 } catch (Exception e) {
 e.printStackTrace();
 } finally {
 closeMedia();
 }
 log.info("关闭媒体流-javacv,{} ", cameraDto.getUrl());
 }
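A hedged reading of why the re-encoded stream may not play, with a minimal sketch (the method name pushEncodedFrames is made up; recorder, grabber, outputStream, running, grabberStatus and sendFrameData() are the ones from the code above): outputStream is never reset, so each outputStream.toByteArray() call returns everything written since recorder.start(), and the NAL-unit scan then runs over an ever-growing buffer. Reading and resetting the buffer after every record() keeps one encoded chunk per send:

 // Sketch only: push one encoded chunk per grabbed video frame.
 private void pushEncodedFrames() throws Exception {
     outputStream.reset(); // drop anything written during recorder.start()
     Frame frame;
     while (running && grabberStatus && (frame = grabber.grab()) != null) {
         if (frame.image == null) {
             continue; // skip audio/empty frames; the recorder is video-only
         }
         recorder.record(frame);
         byte[] accessUnit = outputStream.toByteArray();
         outputStream.reset(); // keep exactly one encoded chunk per read
         if (accessUnit.length > 0) {
             // With the raw "h264" muxer, x264 repeats SPS/PPS before each IDR,
             // so the front end should receive the parameter sets with every keyframe.
             sendFrameData(accessUnit);
         }
     }
     recorder.stop();
 }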