
Other articles (78)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
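
As a rough illustration of the kind of conversion described above (this is not MediaSPIP's actual encoding pipeline, whose settings the excerpt does not show, and the file names are placeholders), the listed renditions could be produced with ffmpeg along these lines:

ffmpeg -i source.mov -c:v libvpx -b:v 1M -c:a libvorbis video.webm
ffmpeg -i source.mov -c:v libtheora -q:v 6 -c:a libvorbis video.ogv
ffmpeg -i source.mov -c:v libx264 -crf 23 -c:a aac video.mp4
ffmpeg -i source.wav -c:a libmp3lame -q:a 4 audio.mp3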
On other sites (7677)
-
ffmpeg: Extracted wav from mp4 video does not have the same duration as the original video
26 July 2021, by John Smith
I have an mp4 video that is 0.92 seconds long, and I am trying to extract the audio of the video to wav format. I have tried several commands (I have provided a list of some of them below); however, the resulting wav does not have the same duration as the original video (it often has a duration of 0.96 seconds instead of 0.92 seconds). Ensuring that the video and audio are synchronous is crucial for what I am doing: the videos are typically videos of a person speaking, and it is important that the speech (audio) stays in sync with the mouth movements of the speaker.


I find it odd that extracting the audio from a video changes its duration, regardless of what happens under the hood during the conversion (in terms of codecs used, etc.).


Some of the commands that I've tried include:


ffmpeg -i <input> -c copy -map 0:a -sample_rate 16000 <output>

ffmpeg -i <input> -async 1 -f wav <output>

ffmpeg -i <input> -vn -acodec copy <output>

ffmpeg -i <input> -ac 2 -f wav <output>


Any insight would be highly appreciated.
Thanks!


Edit: Output of the command
ffmpeg -ignore_editlist true -i 00026.mp4 output.wav


ffmpeg version 2021-02-28-git-85ab9deb98-full_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 10.2.0 (Rev6, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 56. 66.100 / 56. 66.100
 libavcodec 58.126.100 / 58.126.100
 libavformat 58. 68.100 / 58. 68.100
 libavdevice 58. 12.100 / 58. 12.100
 libavfilter 7.107.100 / 7.107.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '00026.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 encoder : Lavf57.37.101
 Duration: 00:00:00.98, start: 0.000000, bitrate: 599 kb/s
 Stream #0:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 160x160 [SAR 1:1 DAR 1:1], 556 kb/s, 25 fps, 25 tbr, 12800 tbn, 25 tbc (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 65 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:1 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to 'output.wav':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 ISFT : Lavf58.68.100
 Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc58.126.100 pcm_s16le
size= 32kB time=00:00:00.96 bitrate= 273.7kbits/s speed= 212x
video:0kB audio:32kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.238037%
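
A hedged side note rather than part of the question: AAC audio is encoded in frames of 1024 samples, which at 16000 Hz is 64 ms per frame, so the audio stream's own length is quantized to multiples of 64 ms (15 frames = 0.96 s), while 23 video frames at 25 fps last the stated 0.92 s; the extracted wav simply reflects the audio stream's length rather than the video's. If the wav must match the video duration exactly, one possible workaround (the 0.92 value comes from the question, not from the log) is to cut the decoded audio at that point:

ffmpeg -i 00026.mp4 -vn -t 0.92 output.wav

The atrim and apad audio filters can be used in the same spirit to trim or pad to an exact length.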



-
In Flutter, how to get an image's pixel dimensions
12 January 2024, by Pianone
My code here:


var response = await Dio().get(
  url,
  options: Options(responseType: ResponseType.bytes),
);
Uint8List? srcImage = Uint8List.fromList(response.data);
Uint8List? watermark = await captureWaterMark();
Image i = Image.memory(srcImage!);
// how can I get the pixel dimensions of (Image i), such as 1920*1080, or just the width/height in pixels?
// ...tell me how to do it...
srcImage = await addWaterMarkByFfmpegCommand(srcImage, watermark);
final result = await ImageGallerySaver.saveImage(
  srcImage!, name: name,
);



I need to get the picture's pixel dimensions so that I can use them in the ffmpeg command. It is a function that adds a watermark to srcImage, but because the two images' pixel ratios are too different, the watermark is not adapted properly.


I tried to get the pixel dimensions from ffmpeg, but I failed.


/// addWaterMark by using an ffmpeg command
Future<Uint8List?> addWaterMarkByFfmpegCommand(Uint8List srcImg, Uint8List watermark) async {
  try {
    final Directory tempDir = await Directory.systemTemp.createTemp();
    final File image1File = File('${tempDir.path}/srcImg.jpg');
    await image1File.writeAsBytes(srcImg);
    final File image2File = File('${tempDir.path}/watermark.png');
    await image2File.writeAsBytes(watermark);

    final String outputFilePath = '${tempDir.path}/output.jpg';
    // once I can get the srcImage pixel dimensions, the iw*1 in the command below
    // will be replaced so that the watermark is adapted to the source image
    final String command =
        '-i ${image1File.path} -i ${image2File.path} -filter_complex "[1:v]scale=iw*1:-1[v1];[0:v][v1]overlay=10:10" -frames:v 1 $outputFilePath';
    await FFmpegKit.execute(command);

    final File outputFile = File(outputFilePath);
    final Uint8List outputBytes = await outputFile.readAsBytes();
    return outputBytes;
  } catch (e) {
    print('Error executing ffmpeg command: $e');
  }
  return null;
}
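
A hedged suggestion rather than part of the question: if the only reason for needing the source image's pixel size is the scale factor in the command above, ffmpeg's scale2ref filter can size the watermark relative to the other input directly inside the filter graph, so nothing needs to be measured on the Dart side. A sketch of the command with an arbitrary 25% factor, the same 10:10 offset, and the same placeholder file names:

-i srcImg.jpg -i watermark.png -filter_complex "[1:v][0:v]scale2ref=w=oh*mdar:h=ih*0.25[wm][base];[base][wm]overlay=10:10" -frames:v 1 output.jpg

Here ih is the height of the reference input (the source image) and mdar is the watermark's own display aspect ratio, so the watermark becomes a quarter of the source height while keeping its proportions. Alternatively, the dimensions can be read in Dart itself, for example with the decodeImageFromList helper from Flutter's painting library, whose ui.Image result exposes width and height.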



PS: I am new to Flutter and ffmpeg. Please help me; I'd appreciate it, thanks a lot.


-
Is there a way to horizontally flip video captured from the Flutter front camera?
5 May 2024, by JoyJoy
Basically, I'm trying to flip a video horizontally after capturing it with the Flutter front camera. So I start recording, stop recording, flip the video and pass it to another page. I'm fairly new and would appreciate any assistance, as my code isn't working.


I've tried doing so using the new ffmpeg_kit_flutter package.


Future<void> flipVideo(String inputPath, String outputPath) async {
  final ffmpegCommand = "-i $inputPath -vf hflip $outputPath";
  final session = FFmpegKit.executeAsync(ffmpegCommand);
  await session.then((session) async {
    final returnCode = await session.getReturnCode();
    if (ReturnCode.isSuccess(returnCode)) {
      print('Video flipping successful');
    } else {
      print('Video flipping failed: ${session.getAllLogs()}');
    }
  });
}

void stopVideoRecording() async {
  XFile videopath = await cameraController.stopVideoRecording();

  try {
    final Directory appDocDir = await getApplicationDocumentsDirectory();
    final String outputDirectory = appDocDir.path;
    final String timeStamp = DateTime.now().millisecondsSinceEpoch.toString();
    final String outputPath = '$outputDirectory/flipped_video_$timeStamp.mp4';

    await flipVideo(videopath.path, outputPath);

    // Once completed,
    Navigator.push(
        context,
        MaterialPageRoute(
            builder: (builder) => VideoViewPage(
                  path: File(outputPath),
                  fromFrontCamera: iscamerafront,
                  flash: flash,
                )));
    print('Video flipping completed');
  } catch (e) {
    print('Error flipping video: $e');
  }
}
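
A hedged note rather than part of the question: the command string built above expands to something like the line below (paths are placeholders). hflip requires re-encoding the video, the audio track can simply be stream-copied, and -y is worth adding, since without it ffmpeg refuses to overwrite an existing output file:

ffmpeg -y -i input.mp4 -vf hflip -c:a copy flipped.mp4

On the Dart side it may also be worth checking that the session has actually finished before navigating away: as far as I know, FFmpegKit.executeAsync returns before the command completes, whereas awaiting FFmpegKit.execute (or using a session-complete callback) guarantees the output file exists when VideoViewPage is pushed.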