Other articles (100)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name no names) and that each browser natively handles only certain video formats.
    Its main advantage is that it benefits from native video support in browsers, making it possible to do without Flash and (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. See the two images below to compare.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (8809)

  • ffmpeg-how to add text to video without encoding [on hold]

    27 December 2013, by user3139491

    I want to add text to a video without re-encoding. How can I do this?
    I created this command:

    ffmpeg -i file.mov -vf drawtext="fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSerif.ttf:text='Text to write':fontsize=20:fontcolor=black:x=100:y=100" final.mov

    This command works well, but the CPU hits its peak,
    so I only want to do this without re-encoding.
    Can -vcodec copy with -map be used?
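
    For context (a note beyond the original question): drawtext rewrites pixel data, so the video stream has to be decoded and re-encoded, and -vcodec copy cannot be combined with a video filter; only the audio stream can be copied. A lower-CPU sketch, assuming an FFmpeg build with libx264:

    # drawtext forces a video re-encode; copy only the audio and use a fast x264 preset
    ffmpeg -i file.mov -vf drawtext="fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSerif.ttf:text='Text to write':fontsize=20:fontcolor=black:x=100:y=100" -c:v libx264 -preset ultrafast -c:a copy final.mov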

  • Flutter (Dart) : Record two videos, merge them and then view new video in gallery

    14 June 2020, by Ittai Barkai

    I am aware that there is already a solution to a very similar question, which can be found at the following link: Flutter/Dart: Find two video segments and merge them into a single valid video file? However, being relatively new to Flutter (and programming in general), I cannot seem to replicate the desired result.

    



    My app is very simple and currently looks like this:

    



    (screenshot of the app)

    



    I click the "Record Video" button to record two videos, which are both successfully stored in the device's gallery, using the Flutter image_picker and gallery_saver packages and the following piece of code:

    



    void _recordVideo() async {
      // Open the camera via image_picker and wait for the recorded file.
      ImagePicker.pickVideo(source: ImageSource.camera)
          .then((File recordedVideo) {
        if (recordedVideo != null && recordedVideo.path != null) {
          setState(() {
            _buttonText = 'Saving in Progress...';
          });
          // Save the recording to the device gallery with gallery_saver.
          GallerySaver.saveVideo(recordedVideo.path).then((_) {
            setState(() {
              _buttonText = 'Video Saved!\n\nClick to Record New Video';
              if (_storedVideoOne == null) {
                _storedVideoOne = recordedVideo;
                print('video 1 stored');
              } else {
                // Once a second recording exists, merge the two videos.
                _storedVideoTwo = recordedVideo;
                print('video 2 stored');
                _videoMerger();
              }
            });
          });
        }
      });
    }


    



    I can view these videos when I click the "View Video From Gallery" button at the bottom.

    



    Next I try to merge these two stored video files using the flutter_ffmpeg package, following the solution provided in the Stack Overflow question mentioned above. I do this with the following function I wrote:

    



    void _videoMerger() async {
      // Build an output path inside the app's documents directory.
      final appDir = await syspaths.getApplicationDocumentsDirectory();
      String rawDocumentPath = appDir.path;
      final outputPath = '$rawDocumentPath/output.mp4';

      final FlutterFFmpeg _flutterFFmpeg = new FlutterFFmpeg();

      // Concatenate the two stored recordings with flutter_ffmpeg.
      String commandToExecute = '-i ${_storedVideoOne.path} -i ${_storedVideoTwo.path} -filter_complex \'[0:0][1:0]concat=n=2:v=0:a=1[out]\' -map \'[out]\' outputPath';
      _flutterFFmpeg.execute(commandToExecute).then((rc) => print("FFmpeg process exited with rc $rc"));
    }
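
    An aside on the command above (an observation, not part of the original question): the string ends with the literal word outputPath rather than the interpolated $outputPath, and concat=n=2:v=0:a=1 requests an audio-only output. For comparison, a standalone ffmpeg invocation that concatenates both the video and audio of two inputs looks roughly like this (the file names are placeholders):

    # two-input concat of video+audio streams; paths are illustrative only
    ffmpeg -i first.mp4 -i second.mp4 -filter_complex '[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]' -map '[v]' -map '[a]' merged.mp4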


    



    But after running the function I do not seem to get a new combined video, which should be stored in outputPath and ideally also be viewable in the gallery. I uploaded the Flutter project to GitHub here:

    



    https://github.com/IttaiBarkai/Flutter-Video-Merger

    



    Any help would be greatly appreciated :)

    


  • Strange "pause" in video after concatenation of two mp4 videos

    3 October 2016, by user606521

    I am concatenating two mp4 videos. The problem is that the first video (intro.mp4) lasts 5 seconds, the second video (output.mp4) lasts 2 seconds, and the video created by concatenating them lasts 9 seconds (it should last 5 + 2 = 7 seconds). In final.mp4, the last frame of the first video (intro.mp4) is shown for an additional 2 seconds before the second video (output.mp4) is played. It looks like a lag when watching the video. What am I doing wrong?

    list.txt:

    file 'data/intro.mp4'
    file 'output.mp4'

    command:

    ./bin/ffmpeg -f concat -i list.txt -c copy final.mp4

    output:

    ffmpeg version 2.7.2 Copyright (c) 2000-2015 the FFmpeg developers
     built with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
     configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --disable-doc --arch=x86_64 --enable-runtime-cpudetect
     libavutil      54. 27.100 / 54. 27.100
     libavcodec     56. 41.100 / 56. 41.100
     libavformat    56. 36.100 / 56. 36.100
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 16.101 /  5. 16.101
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.100 /  1.  2.100
     libpostproc    53.  3.100 / 53.  3.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f8a6a80e600] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'list.txt':
     Duration: N/A, start: 0.000000, bitrate: 1296 kb/s
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p, 960x540, 1163 kb/s, 23.98 fps, 23.98 tbr, 11988 tbn, 47.95 tbc
       Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 133 kb/s
    [mp4 @ 0x7f8a6c006a00] Codec for stream 0 does not use global headers but container format requires global headers
    [mp4 @ 0x7f8a6c006a00] Codec for stream 1 does not use global headers but container format requires global headers
    Output #0, mp4, to 'final.mp4':
     Metadata:
       encoder         : Lavf56.36.100
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 960x540, q=2-31, 1163 kb/s, 23.98 fps, 23.98 tbr, 11988 tbn, 11988 tbc
       Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, 133 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f8a6a80e600] Auto-inserting h264_mp4toannexb bitstream filter
    frame=  258 fps=0.0 q=-1.0 Lsize=     997kB time=00:00:09.84 bitrate= 829.5kbits/s    
    video:897kB audio:93kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.634306%
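
    A possible way to narrow this down (a general check, not a diagnosis of these specific files): compare what each input actually reports, and see whether re-encoding instead of stream copy removes the extra two seconds.

    # inspect each input's container duration and stream timebases (ffprobe ships with FFmpeg)
    ffprobe -v error -show_entries format=duration:stream=codec_type,time_base data/intro.mp4
    ffprobe -v error -show_entries format=duration:stream=codec_type,time_base output.mp4

    # if timestamps are skewed by stream copy, a re-encode normalizes them
    ./bin/ffmpeg -f concat -i list.txt -c:v libx264 -c:a aac final_reencoded.mp4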