Other articles (30)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
    The user can access profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

On other sites (5898)

  • Decompressing time-To-Sample table (STTS) in an MP4 file

    30 May 2016, by man-r

    I have an MP4 video byte array and I need to generate a thumbnail for it using its middle frame (e.g. if the video length is 10 seconds, I need to grab the picture from the 5th second).

    I managed to parse through the file and extract its boxes (atoms). I have also managed to get the video length from the mvhd box, and I have extracted:
    1. the time-to-sample table from the stts box,
    2. the sample-to-chunk table from the stsc box,
    3. the chunk-offset table from the stco box,
    4. the sample-size table from the stsz box,
    5. the sync-sample table from the stss box.

    I know that all the actual media data lives in the mdat box and that I need to correlate the tables above to find the exact frame offset in the file, but my question is how? The tables' data seems to be compressed (especially the time-to-sample table), but I don't know how to decompress it.
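
    (For reference: the stts data is not compressed in the zip/deflate sense, it is run-length encoded. Each entry is a pair of 32-bit values, a sample count and a sample delta, meaning "the next count samples each last delta timescale units". Below is a minimal sketch of walking those runs to find the sample that covers a target media time, assuming the table has been parsed into {sampleCount, sampleDelta} rows as in the parsing code further down; the method name findSampleForTime is just illustrative.)

    static int findSampleForTime(int[][] timeToSampleTable, long targetTime) {
        long time = 0;      // running decode time, in timescale units
        int sample = 0;     // running 0-based sample index
        for (int[] entry : timeToSampleTable) {
            int sampleCount = entry[0];
            int sampleDelta = entry[1];
            for (int i = 0; i < sampleCount; i++) {
                if (targetTime < time + sampleDelta) {
                    return sample;          // this sample's interval covers targetTime
                }
                time += sampleDelta;
                sample++;
            }
        }
        return sample - 1;                  // target is beyond the last sample
    }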

    Any help is appreciated.

    Below are the code samples.

    Code to convert bytes to hex:

    // lookup table used by bytesToHex; each byte becomes two hex characters
    final static char[] hexArray = "0123456789ABCDEF".toCharArray();

    public static char[] bytesToHex(byte[] bytes) {
       char[] hexChars = new char[bytes.length * 2];
       for (int j = 0; j < bytes.length; j++) {
           int v = bytes[j] & 0xFF;
           hexChars[j * 2] = hexArray[v >>> 4];
           hexChars[j * 2 + 1] = hexArray[v & 0x0F];
       }
       return hexChars;
    }

    Code for getting a box offset:

    final static String MOOV                          = "6D6F6F76";
    final static String MOOV_MVHD                     = "6D766864";
    final static String MOOV_TRAK                     = "7472616B";
    final static String MOOV_TRAK_MDIA                = "6D646961";
    final static String MOOV_TRAK_MDIA_MINF           = "6D696E66";
    final static String MOOV_TRAK_MDIA_MINF_STBL      = "7374626C";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSD = "73747364";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STTS = "73747473";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSS = "73747373";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSC = "73747363";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STCO = "7374636F";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSZ = "7374737A";

    static int getBox(char[] s, int offset, String type) {
       // Each box starts with a 4-byte size (8 hex chars) followed by a 4-byte type (8 hex chars).
       // Walk the boxes, stepping inside the known container boxes so nested boxes can be found.
       int typeOffset = -1;
       for (int i = offset * 2; i + 16 <= s.length; ) {
           int size = Integer.parseInt(new String(Arrays.copyOfRange(s, i, i + 8)), 16);
           String boxType = new String(Arrays.copyOfRange(s, i + 8, i + 16));
           if (boxType.equalsIgnoreCase(type)) {
               typeOffset = i / 2;
               break;
           }
           boolean isContainer = boxType.equalsIgnoreCase(MOOV) || boxType.equalsIgnoreCase(MOOV_TRAK)
                   || boxType.equalsIgnoreCase(MOOV_TRAK_MDIA) || boxType.equalsIgnoreCase(MOOV_TRAK_MDIA_MINF)
                   || boxType.equalsIgnoreCase(MOOV_TRAK_MDIA_MINF_STBL);
           i += isContainer ? 16 : (size * 2);   // step into containers, skip over other boxes
       }

       return typeOffset;
    }

    Code for getting the duration and timescale:

    static int[] getDuration(char[] s) {
       int mvhdOffset = getBox(s, 0, MOOV_MVHD);
       int timeScaleStart = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4)*2;
       int timeScaleEnd   = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;

       int durationStart  = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;
       int durationEnd    = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4 + 4)*2;

       String timeScaleHex = new String(Arrays.copyOfRange(s, timeScaleStart, timeScaleEnd));
       String durationHex = new String(Arrays.copyOfRange(s, durationStart, durationEnd));

       int timeScale = Integer.parseInt(timeScaleHex, 16);
       int duration = Integer.parseInt(durationHex, 16);

       int[] result = {duration, timeScale};
       return result;
    }
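
    Since the mvhd duration is expressed in timescale units, the midpoint of the video (the frame wanted for the thumbnail) follows directly from these two values; a small illustrative snippet, assuming getDuration() as defined above:

    int[] dt = getDuration(s);
    long middleTime = dt[0] / 2;                   // midpoint in timescale units
    double middleSeconds = dt[0] / (2.0 * dt[1]);  // the same midpoint in seconds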

    Code to get the time-to-sample table:

    static int[][] getTimeToSampleTable(char[] s, int trakOffset) {
       int offset = getBox(s, trakOffset, MOOV_TRAK_MDIA_MINF_STBL_STTS);
       int sizeStart = offset*2;
       int sizeEnd   = offset*2 + (4)*2;

       int typeStart = offset*2 + (4)*2;
       int typeEnd   = offset*2 + (4 + 4)*2;

       int noOfEntriesStart = offset*2 + (4 + 4 + 1 + 3)*2;
       int noOfEntriesEnd   = offset*2 + (4 + 4 + 1 + 3 + 4)*2;

       String sizeHex = new String(Arrays.copyOfRange(s, sizeStart, sizeEnd));
       String typeHex = new String(Arrays.copyOfRange(s, typeStart, typeEnd));
       String noOfEntriesHex = new String(Arrays.copyOfRange(s, noOfEntriesStart, noOfEntriesEnd));

       int size = Integer.parseInt(sizeHex, 16);
       int noOfEntries = Integer.parseInt(noOfEntriesHex, 16);

       int[][] timeToSampleTable = new int[noOfEntries][2];

       for (int i = 0; i < noOfEntries; i++) {
           // each stts entry is a (sample count, sample delta) pair of 4 bytes each,
           // laid out immediately after the entry-count field
           int entryStart = noOfEntriesEnd + (i * 8) * 2;
           String sampleCountHex = new String(Arrays.copyOfRange(s, entryStart, entryStart + 8));
           String sampleDeltaHex = new String(Arrays.copyOfRange(s, entryStart + 8, entryStart + 16));
           timeToSampleTable[i][0] = Integer.parseInt(sampleCountHex, 16);
           timeToSampleTable[i][1] = Integer.parseInt(sampleDeltaHex, 16);
       }

       return timeToSampleTable;
    }
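
    Once the sample index for the target time is known (see the stts note above), the remaining tables give the byte position inside mdat: stsc says which chunk the sample falls in, stco gives that chunk's file offset, and stsz gives the sizes of the samples that precede it within the chunk. Below is a rough sketch of that lookup, assuming the tables have been parsed into plain arrays (sampleToChunk rows as {firstChunk, samplesPerChunk, descriptorIndex}, with firstChunk 1-based as stored in the file); getSampleOffset is an illustrative name, not part of the code above.

    static long getSampleOffset(int sampleIndex, int[][] sampleToChunk,
                                long[] chunkOffsets, int[] sampleSizes) {
        int samplesBefore = 0;  // samples contained in all earlier chunks
        for (int row = 0; row < sampleToChunk.length; row++) {
            int firstChunk = sampleToChunk[row][0];
            int samplesPerChunk = sampleToChunk[row][1];
            // this stsc row applies up to (but not including) the next row's first chunk
            int lastChunk = (row + 1 < sampleToChunk.length)
                    ? sampleToChunk[row + 1][0] - 1
                    : chunkOffsets.length;
            for (int chunk = firstChunk; chunk <= lastChunk; chunk++) {
                if (sampleIndex < samplesBefore + samplesPerChunk) {
                    // the sample lives in this chunk: start at the chunk offset and
                    // add the sizes of the samples that precede it inside the chunk
                    long fileOffset = chunkOffsets[chunk - 1];
                    for (int j = samplesBefore; j < sampleIndex; j++) {
                        fileOffset += sampleSizes[j];
                    }
                    return fileOffset;
                }
                samplesBefore += samplesPerChunk;
            }
        }
        return -1;  // sample index out of range
    }

    For a thumbnail it is usually best to snap the target sample to the nearest preceding sync sample from the stss table before doing this lookup, since only keyframes can be decoded on their own.
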
  • Flutter: FFmpeg Not Executing Filter

    5 November 2022, by Dennis Ashford

    I am downloading a video from Firebase and then trying to apply a watermark to that video that will then be saved in a temporary directory in the cache. I am using the ffmpeg_kit_flutter package to do this. There is very little online about how this should work in Flutter.

    


    The video and image are loaded and stored in the cache properly. However, the FFmpegKit.execute call is not working and does not create a new output file in the cache. Any suggestions on how to make that function execute? Are the FFmpeg commands correct?

    


    Future<String> waterMarkVideo(String videoPath, String watermarkPath) async {
        //these calls are to load the video into temporary directory
        final response = await http.get(Uri.parse(videoPath));
        final originalVideo = File('${(await getTemporaryDirectory()).path}/video.mp4');
        await originalVideo.create(recursive: true);
        await originalVideo.writeAsBytes(response.bodyBytes);
        print('video path' + originalVideo.path);

        //this grabs the watermark image from assets and decodes it
        final byteData = await rootBundle.load(watermarkPath);
        final watermark = File('${(await getTemporaryDirectory()).path}/image.png');
        await watermark.create(recursive: true);
        await watermark.writeAsBytes(byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
        print('watermark path' + watermark.path);

        //this creates temporary directory for new watermarked video
        var tempDir = await getTemporaryDirectory();
        final newVideoPath = '${tempDir.path}/${DateTime.now().microsecondsSinceEpoch}result.mp4';

        //and now attempting to work with ffmpeg package to overlay watermark on video
        await FFmpegKit.execute("-i $originalVideo -i $watermark -filter_complex 'overlay[out]' -map '[out]' $newVideoPath")
            .then((rc) => print('FFmpeg process exited with rc $rc'));
        print('new video path' + newVideoPath);

        return newVideoPath;
    }


    The logs give the file paths and also show this:


    FFmpegKitFlutterPlugin 0x600000000fe0 started listening to events on 0x600001a4b280.
    flutter: Loaded ffmpeg-kit-flutter-ios-https-x86_64-4.5.1.
    flutter: FFmpeg process exited with rc Instance of 'FFmpegSession'


  • Xuggler can't open IContainer of icecast server [Webm live video stream]

    9 June 2016, by Roy Bean

    I’m trying to stream live WebM video.

    I tested some servers and Icecast is my pick.

    With ffmpeg capturing from an IP camera and publishing to the Icecast server, I’m able to see the video in HTML5

    using this command:

    ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme@192.168.0.146:8001/test

    I’m using Java and tried to do this with Xuggler, but I’m getting an error when opening the stream:

    final String urlOut = "icecast://source:hackme@192.168.0.146:8001/agora.webm";
    final IContainer outContainer = IContainer.make();

    final IContainerFormat outContainerFormat = IContainerFormat.make();
    outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");

    int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);

    if (rc >= 0) {
    } else {
        Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Fail to open Container " + IError.make(rc));
    }

    Any help?
    I’m getting the error -2:
    Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)

    It is also very important to set the content type to video/webm, because Icecast by default sets the MIME type to audio/mpeg.