Advanced search

Media (91)

Other articles (35)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects, in the news section of your MédiaSPIP.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form: For a document of type "news item", the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.

On other sites (8977)

  • Web-based video editor

    13 April 2021, by Danny

    We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.

    



    The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback occurs on lower-end hardware like televisions and set-top boxes.

    



    These performance issues would go away if there were a way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.

    



    Given a blob of JSON describing each layer, how to draw each type of object, its animation points, and so on, how could I take that and convert it to video on the server?

    



    My first attempt at this was using PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since PhantomJS has no HTML5 video tag support, and even if it did, I would lose any audio.
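    The screenshots-to-video step itself is straightforward with ffmpeg. A minimal sketch in Node — the frame naming pattern, frame rate, and output file name are illustrative assumptions, not part of the original setup:

```javascript
// Build the ffmpeg argument list that merges numbered screenshots
// into an H.264 video. Names and frame rate are assumptions.
function buildMergeArgs(fps, framePattern, outputPath) {
  return [
    '-framerate', String(fps),  // rate the screenshots were captured at
    '-i', framePattern,         // e.g. 'frame_%04d.png'
    '-c:v', 'libx264',          // encode to H.264
    '-pix_fmt', 'yuv420p',      // broadest player compatibility
    '-y', outputPath,
  ];
}

// To actually run it (requires ffmpeg on the PATH):
// require('child_process').spawn('ffmpeg', buildMergeArgs(30, 'frame_%04d.png', 'out.mp4'));
```

    The argument list can then be handed to child_process.spawn, the same mechanism the server code in the RTMP question below already uses.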

    



    The other way I was thinking of doing it would be to again load the playback page in PhantomJS, but turn off the video layers and leave them transparent, then take screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
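    That compositing step could be sketched the same way — here the transparent screenshots are assumed to be numbered PNGs laid over the rendered video layer with ffmpeg's overlay filter; every path and name below is hypothetical:

```javascript
// Build ffmpeg arguments that overlay a transparent PNG sequence
// (input 1) on top of a video layer (input 0). All paths are
// placeholders for illustration.
function buildOverlayArgs(videoPath, overlayPattern, fps, outputPath) {
  return [
    '-i', videoPath,                  // the video layer, with its audio
    '-framerate', String(fps),
    '-i', overlayPattern,             // e.g. 'overlay_%04d.png'
    '-filter_complex', '[0:v][1:v]overlay=0:0[out]',
    '-map', '[out]',                  // composited video
    '-map', '0:a?',                   // keep the video's audio if present
    '-c:v', 'libx264', '-c:a', 'copy',
    '-y', outputPath,
  ];
}
```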

    



    None of this feels very elegant, though. I know there are web-based video editors out there that do basically what I'm trying to accomplish, so how do they do it?

    


  • Is there a way to horizontally flip video captured from the Flutter front camera

    5 May 2024, by JoyJoy

    Basically, I'm trying to flip a video horizontally after capturing it from the Flutter front camera. So I start recording, stop recording, flip the video, and pass it to another page. I'm fairly new and would appreciate any assistance, as my code isn't working.

    


    I've tried doing so using the new ffmpeg_kit_flutter package.

    


    Future<void> flipVideo(String inputPath, String outputPath) async {
      // FFmpegKit.execute completes only when the run has finished;
      // executeAsync returns as soon as the session starts, so the
      // output file may not exist yet when the caller resumes.
      final ffmpegCommand = '-i "$inputPath" -vf hflip "$outputPath"';
      final session = await FFmpegKit.execute(ffmpegCommand);
      final returnCode = await session.getReturnCode();
      if (ReturnCode.isSuccess(returnCode)) {
        print('Video flipping successful');
      } else {
        print('Video flipping failed: ${await session.getAllLogs()}');
      }
    }

    void stopVideoRecording() async {
      XFile videopath = await cameraController.stopVideoRecording();

      try {
        final Directory appDocDir = await getApplicationDocumentsDirectory();
        final String outputDirectory = appDocDir.path;
        final String timeStamp = DateTime.now().millisecondsSinceEpoch.toString();
        final String outputPath = '$outputDirectory/flipped_video_$timeStamp.mp4';

        await flipVideo(videopath.path, outputPath);

        // Once completed,
        Navigator.push(
            context,
            MaterialPageRoute(
                builder: (builder) => VideoViewPage(
                      path: File(outputPath),
                      fromFrontCamera: iscamerafront,
                      flash: flash,
                    )));
        print('Video flipping completed');
      } catch (e) {
        print('Error flipping video: $e');
      }
    }

  • FFmpeg to RTMP - no audio on output [closed]

    25 March 2022, by John Mergene Arellano

    From my client side, I am sending a stream using the Socket.IO library. I captured the video and audio using the getUserMedia API.

    navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
        window.videoStream = video.srcObject = stream;
        let mediaRecorder = new MediaRecorder(stream, {
            videoBitsPerSecond: 3 * 1024 * 1024
        });
        mediaRecorder.addEventListener('dataavailable', (e) => {
            let data = e.data;
            socket.emit('live', data);
        });
        mediaRecorder.start(1000);
    });

    Then my server will receive the stream and write it to FFmpeg.

    client.on('live', (stream) => {
        if (ffmpeg)
            ffmpeg.stdin.write(stream);
    });

    I tried watching the live video in VLC media player. There is a five-second delay and no audio output.


    Please see below for the FFmpeg options I used:

    ffmpeg = this.CHILD_PROCESS.spawn("ffmpeg", [
        '-f', 'lavfi',
        '-i', 'anullsrc',
        '-i', '-',
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
        '-y', // force overwrite
        '-use_wallclock_as_timestamps', '1', // used for audio sync
        '-async', '1', // used for audio sync
        '-bufsize', '1000',
        '-f', 'flv',
        `rtmp://127.0.0.1:1935/live/stream`
    ]);

    What is wrong with my setup?


    I tried removing some of the options, but it did not help. I am expecting the output to contain both the video and audio captured via the getUserMedia API.
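    For what it's worth, one possible culprit (an assumption, not a confirmed diagnosis) is that the anullsrc input contributes a silent audio track that ffmpeg's default stream selection can pick instead of the MediaRecorder audio arriving on stdin. A sketch of an argument list that drops anullsrc and maps the piped input's own streams explicitly:

```javascript
// Sketch: take both video and audio from the WebM chunks piped to
// stdin, with no silent anullsrc input competing for selection.
const args = [
  '-i', '-',                     // MediaRecorder chunks on stdin
  '-map', '0:v', '-map', '0:a',  // use the pipe's own video and audio
  '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
  '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
  '-f', 'flv',
  'rtmp://127.0.0.1:1935/live/stream',
];
// require('child_process').spawn('ffmpeg', args);
```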
