
Media (91)

Other articles (62)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Helping with its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

On other sites (11972)

  • Flutter: FFmpeg Not Executing Filter

    5 November 2022, by Dennis Ashford

    I am downloading a video from Firebase and then trying to apply a watermark to that video that will then be saved in a temporary directory in the cache. I am using the ffmpeg_kit_flutter package to do this. There is very little online about how this should work in Flutter.

    


    The video and image are loaded and stored in the cache properly. However, the FFmpegKit.execute call is not working and does not create a new output file in the cache. Any suggestions on how to make that function execute? Are the FFmpeg commands correct?

    


    Future<String> waterMarkVideo(String videoPath, String watermarkPath) async {
      //these calls are to load the video into temporary directory
      final response = await http.get(Uri.parse(videoPath));
      final originalVideo = File('${(await getTemporaryDirectory()).path}/video.mp4');
      await originalVideo.create(recursive: true);
      await originalVideo.writeAsBytes(response.bodyBytes);
      print('video path' + originalVideo.path);

      //this grabs the watermark image from assets and decodes it
      final byteData = await rootBundle.load(watermarkPath);
      final watermark = File('${(await getTemporaryDirectory()).path}/image.png');
      await watermark.create(recursive: true);
      await watermark.writeAsBytes(byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
      print('watermark path' + watermark.path);

      //this creates temporary directory for new watermarked video
      var tempDir = await getTemporaryDirectory();
      final newVideoPath = '${tempDir.path}/${DateTime.now().microsecondsSinceEpoch}result.mp4';

      //and now attempting to work with ffmpeg package to overlay watermark on video
      await FFmpegKit.execute("-i $originalVideo -i $watermark -filter_complex 'overlay[out]' -map '[out]' $newVideoPath")
          .then((rc) => print('FFmpeg process exited with rc $rc'));
      print('new video path' + newVideoPath);

      return newVideoPath;
    }


    The logs give the file paths and also give this:


    FFmpegKitFlutterPlugin 0x600000000fe0 started listening to events on 0x600001a4b280.
    flutter: Loaded ffmpeg-kit-flutter-ios-https-x86_64-4.5.1.
    flutter: FFmpeg process exited with rc Instance of 'FFmpegSession'
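
    Two things stand out in the call above (offered as an editorial note, not a confirmed fix): interpolating $originalVideo and $watermark puts Dart's File.toString() output into the command rather than the file paths, and FFmpegKit.execute completes with an FFmpegSession object, which is why the log prints "Instance of 'FFmpegSession'" instead of an exit code. A minimal sketch of how the call could be issued and checked, reusing the variables above and assuming ffmpeg_kit_flutter's documented session API (FFmpegKit and ReturnCode from package:ffmpeg_kit_flutter):

    // Sketch only: use .path so the command contains real paths, then inspect the
    // session's return code instead of printing the session object itself.
    final command = "-i ${originalVideo.path} -i ${watermark.path} "
        "-filter_complex 'overlay[out]' -map '[out]' $newVideoPath";
    final session = await FFmpegKit.execute(command);
    final returnCode = await session.getReturnCode();
    if (ReturnCode.isSuccess(returnCode)) {
      print('watermarked video written to $newVideoPath');
    } else {
      // dump ffmpeg's own log output to see why the overlay filter failed
      print(await session.getAllLogsAsString());
    }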


  • JW Player fails with WMA files: Task Queue failed at step 5

    17 March 2018, by Sabeena

    I have a JW Player that plays MP3 files, but with WMA files it gives this error:

    Task Queue failed at step 5: Playlist could not be loaded: Playlist file did not contain a valid playlist

    I thought of two reasons:

    1. There is no support for WMA, but please confirm this.
    2. Somewhere I need to set up the type of file I am using in this player.

    If WMA is not supported in JW Player, how can I play WMA and MP3 files on my website?

    Is ffmpeg needed to convert WMA to MP3 while uploading?
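
    (As an aside rather than a confirmed answer: if transcoding at upload time turns out to be the way to go, a typical ffmpeg invocation for converting WMA to MP3 looks like the line below; it assumes an ffmpeg build with the libmp3lame encoder, and the file names are placeholders.)

    ffmpeg -i input.wma -vn -codec:a libmp3lame -q:a 2 output.mp3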

  • How can I correctly provide a mock webcam video to Chrome?

    15 December 2022, by doppelgreener

    I'm trying to run end-to-end testing in Chrome for a product that requires a webcam feed halfway through to operate. From what I understand, this means providing a fake webcam video to Chrome using the --use-file-for-fake-video-capture="/path/to/video.y4m" command-line argument. It will then use that as the webcam video.


    However, no matter what y4m file I provide, I get the following error from Chrome running under these conditions:


    DOMException: Could not start video source
    {
      code: 0,
      message: "Could not start video source",
      name: "NotReadableError"
    }


    Notably I can provide an audio file just fine using --use-file-for-fake-audio-capture and Chrome will work with it well. The video has been my sticking point.


    This error comes out of the following straightforward mediaDevices request:


    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then(data => {
        // do stuff
      })
      .catch(err => {
        // oh no!
      });


    (This always hits the “oh no!” branch when a video file is provided.)


    What I've tried so far


    I've been running Chrome with the following command-line arguments (newlines added for readability); I'm using a Mac, hence the open command:


    open -a "Google Chrome" --args
      --disable-gpu
      --use-fake-device-for-media-stream
      --use-file-for-fake-video-capture="~/Documents/mock/webcam.y4m"
      --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"


    webcam.y4m and microphone.wav were generated from a video file I recorded.


    I first recorded a twenty-second MP4 video using my browser's MediaRecorder, downloaded the result, and converted it with the following commands:


    ffmpeg -y -i original.mp4 -f wav -vn microphone.wav
    ffmpeg -y -i original.mp4 webcam.y4m


    When this didn't work, I tried the same thing with a twenty-second movie file I recorded in QuickTime:


    ffmpeg -y -i original.mov -f wav -vn microphone.wav
    ffmpeg -y -i original.mov webcam.y4m


    When that also failed, I went straight to the Chromium file that explains fake video capture, followed the example y4m file list it points to, downloaded the grandma file, and provided that as the command-line argument to Chrome instead:


    open -a "Google Chrome" --args
      --disable-gpu
      --use-fake-device-for-media-stream
      --use-file-for-fake-video-capture="~/Documents/mock/grandma_qcif.y4m"
      --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"


    Chrome provides me with the exact same error in all of these situations.


    The only time Chrome doesn't error out with that mediaDevices request is when I omit the video completely:


    open -a "Google Chrome" --args
      --disable-gpu
      --use-fake-device-for-media-stream
      --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"


    Accounting for C420mpeg2


    TestRTC suggests Chrome will “crash” if given a C420mpeg2 file, and recommends that simply replacing the metadata fixes the issue. Indeed, the video file I generate from ffmpeg has the following header:


    YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2


    Chrome doesn't actually crash when run with this file; I just get the error above. And if I edit the video file to use the following header, per TestRTC's recommendation, I get the same situation:


    YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420 XYSCSS=420MPEG2


    The video file still gives me the above error in these conditions.
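
    (For reference, the metadata replacement TestRTC describes can be scripted rather than edited by hand; the one-liner below uses BSD sed syntax, matching the macOS setup above, and rewrites only the first line of the file. As noted, the edited file still produced the same error here, so this records the step rather than a fix.)

    sed -i '' '1s/C420mpeg2/C420/' webcam.y4m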


    What can/should I do?


    How should I be providing a video file to Chrome for this command-line argument?


    How should I be recording or creating the video file?


    How should I convert it to y4m?
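
    (One suggestion for narrowing this down, offered as a sketch rather than a known answer: a synthetic clip can be generated directly in Y4M with ffmpeg's lavfi test source, which takes the recording and MP4/MOV conversion steps out of the equation; the resulting header may still read C420mpeg2 and need the swap shown earlier.)

    ffmpeg -f lavfi -i testsrc=duration=20:size=1280x720:rate=30 -pix_fmt yuv420p webcam.y4m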
