Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (33)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into another language, which allows MediaSPIP to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all of the MediaSPIP language modules are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (4932)

  • avformat/mov: Add support for demuxing still HEIC images

    4 October 2023, by Vignesh Venkatasubramanian via ffmpeg-devel
    avformat/mov: Add support for demuxing still HEIC images

    They are similar to AVIF images (both use the HEIF container).
    The only additional work needed is to parse the hvcC box and put
    it in the extradata.

    With this patch applied, ffmpeg (when built with an HEVC decoder)
    is able to decode the files in
    https://github.com/nokiatech/heif/tree/gh-pages/content/images
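
    As a quick sanity check of the new support, an invocation like the following should now demux and decode such a file to an image (assuming a build configured with an HEVC decoder; "sample.heic" is a hypothetical local copy of one of those files):

      ffmpeg -i sample.heic sample.png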

    Also add a couple of fate tests with samples from
    https://github.com/nokiatech/heif_conformance/tree/master/conformance_files

    Partially fixes trac ticket #6521.

    Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/isom.h
    • [DH] libavformat/mov.c
    • [DH] tests/fate/mov.mak
    • [DH] tests/ref/fate/mov-heic-demux-still-image-1-item
    • [DH] tests/ref/fate/mov-heic-demux-still-image-multiple-items
  • ffmpeg problem with split & combine portions of same video [closed]

    10 February 2024, by icyGuy

    I'm trying to split & combine multiple portions of the same video file.

    Since there are multiple portions that I want to combine, I want to use filter_complex to do this.

    Also, since some of the portions are quite short, I want to use precise timestamps (setpts=PTS-STARTPTS).

    Please see the contents of the "script.txt" file below.

    [0:v]trim=0.823625:5.88717,setpts=PTS-STARTPTS[v0];
    [0:a]atrim=0.823625:5.88717,asetpts=PTS-STARTPTS[a0];
    [0:v]trim=6.87858:10.2093,setpts=PTS-STARTPTS[v1];
    [0:a]atrim=6.87858:10.2093,asetpts=PTS-STARTPTS[a1];
    [0:v]trim=10.5683:11.5989,setpts=PTS-STARTPTS[v2];
    [0:a]atrim=10.5683:11.5989,asetpts=PTS-STARTPTS[a2];
    [0:v]trim=11.9066:13.2301,setpts=PTS-STARTPTS[v3];
    [0:a]atrim=11.9066:13.2301,asetpts=PTS-STARTPTS[a3];
    [0:v]trim=14.2123:14.903,setpts=PTS-STARTPTS[v4];
    [0:a]atrim=14.2123:14.903,asetpts=PTS-STARTPTS[a4];
    [0:v]trim=15.2467:16.5819,setpts=PTS-STARTPTS[v5];
    [0:a]atrim=15.2467:16.5819,asetpts=PTS-STARTPTS[a5];
    [0:v]trim=17.1012:20.1223,setpts=PTS-STARTPTS[v6];
    [0:a]atrim=17.1012:20.1223,asetpts=PTS-STARTPTS[a6];
    [0:v]trim=20.9504:22.5714,setpts=PTS-STARTPTS[v7];
    [0:a]atrim=20.9504:22.5714,asetpts=PTS-STARTPTS[a7];
    [0:v]trim=23.4482:24.8745,setpts=PTS-STARTPTS[v8];
    [0:a]atrim=23.4482:24.8745,asetpts=PTS-STARTPTS[a8];
    [0:v]trim=25.5697:26.8718,setpts=PTS-STARTPTS[v9];
    [0:a]atrim=25.5697:26.8718,asetpts=PTS-STARTPTS[a9];
    [0:v]trim=27.5758:27.9942,setpts=PTS-STARTPTS[v10];
    [0:a]atrim=27.5758:27.9942,asetpts=PTS-STARTPTS[a10];
    [0:v]trim=28.5431:30,setpts=PTS-STARTPTS[v11];
    [0:a]atrim=28.5431:30,asetpts=PTS-STARTPTS[a11];
    [v0][a0][v1][a1][v2][a2][v3][a3][v4][a4][v5][a5][v6][a6][v7][a7][v8][a8][v9][a9][v10][a10][v11][a11]
    concat=n=12:v=1:a=1[outv][outa]

    The command I am using to run ffmpeg with this script file is below:

    ffmpeg -i "input.mp4" -c:v libx265 -filter_complex_script "script.txt" -map [outv] -map [outa] "output.txt"
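
    For comparison, here is the same trim/concat pattern reduced to two segments and written inline with -filter_complex (hypothetical timestamps and output name, POSIX shell line continuations; a sketch of the graph's shape, not a fix for the problem described below):

      ffmpeg -i "input.mp4" -filter_complex \
        "[0:v]trim=0:5,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:5,asetpts=PTS-STARTPTS[a0];
         [0:v]trim=10:15,setpts=PTS-STARTPTS[v1];[0:a]atrim=10:15,asetpts=PTS-STARTPTS[a1];
         [v0][a0][v1][a1]concat=n=2:v=1:a=1[outv][outa]" \
        -map "[outv]" -map "[outa]" -c:v libx265 "two_segments.mp4"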

    But, after running this command, the output video file is only 5 seconds long (the very first portion of the input video).

    As we can see from the first line of "script.txt", the start timestamp is 0.82 and the end timestamp is 5.89, giving a duration of about 5 s for the first piece of the output video.

    So, ffmpeg is not joining any of the other parts of the input video after the first one, and I am unable to understand why.

    I have been facing this problem for the past couple of months, since I updated ffmpeg. The same script file and command-line syntax worked fine before that. Did some change in the update alter the syntax that ffmpeg expects for this task?

    Any help is much appreciated.

  • DXGI Desktop Duplication: encoding frames to send them over the network

    13 November 2016, by prazuber

    I'm trying to write an app which will capture a video stream of the screen and send it to a remote client. I've found that the best way to capture the screen on Windows is to use the DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams duplicated frames to screen. Now, I've been wondering what the easiest, but still relatively fast, way is to encode those frames and send them over the network.

    The frames come from AcquireNextFrame with a surface that contains the desktop bitmap, plus metadata describing the dirty and move regions that were updated. From here, I have a couple of options:

    1. Extract a bitmap from the DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP. While straightforward, I fear that this method will be too slow, as it doesn't take advantage of any native Windows methods. Converting a D3D texture to an ffmpeg-compatible bitmap also seems like unnecessary work.
    2. From this answer: convert the D3D texture to an IMFSample and use Media Foundation's SinkWriter to encode the frame. I found this tutorial on video encoding, but I haven't yet found a way to get each encoded frame immediately and send it, instead of dumping all of them to a video file. (A minimal sketch of this texture-to-SinkWriter path follows below.)
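
    A minimal sketch of option 2, assuming Windows 8+, a BGRA duplication texture, and error handling collapsed into a macro. It is not a complete implementation: the GPU path normally also wants an IMFDXGIDeviceManager passed via MF_SINK_WRITER_D3D_MANAGER, and a SinkWriter writes to a file, so getting each encoded frame back for the network would mean a custom media sink or driving the H.264 encoder MFT directly.

      // Sketch only: encode desktop-duplication textures with the MF SinkWriter.
      #include <mfapi.h>
      #include <mfidl.h>
      #include <mfreadwrite.h>
      #include <d3d11.h>
      #pragma comment(lib, "mfplat.lib")
      #pragma comment(lib, "mfreadwrite.lib")
      #pragma comment(lib, "mfuuid.lib")

      #define CHECK(hr) do { if (FAILED(hr)) return (hr); } while (0)

      // Create a SinkWriter with an H.264 output stream and a BGRA input type.
      HRESULT CreateWriter(UINT width, UINT height,
                           IMFSinkWriter **writer, DWORD *stream)
      {
          CHECK(MFStartup(MF_VERSION));
          // NOTE: for hardware encoding, pass attributes carrying
          // MF_SINK_WRITER_D3D_MANAGER here instead of the nullptr.
          CHECK(MFCreateSinkWriterFromURL(L"capture.mp4", nullptr, nullptr, writer));

          IMFMediaType *out = nullptr;
          CHECK(MFCreateMediaType(&out));
          out->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
          out->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
          out->SetUINT32(MF_MT_AVG_BITRATE, 8000000);
          out->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
          MFSetAttributeSize(out, MF_MT_FRAME_SIZE, width, height);
          MFSetAttributeRatio(out, MF_MT_FRAME_RATE, 60, 1);
          CHECK((*writer)->AddStream(out, stream));
          out->Release();

          IMFMediaType *in = nullptr;   // duplication gives B8G8R8A8 == MF ARGB32
          CHECK(MFCreateMediaType(&in));
          in->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
          in->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_ARGB32);
          in->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
          MFSetAttributeSize(in, MF_MT_FRAME_SIZE, width, height);
          MFSetAttributeRatio(in, MF_MT_FRAME_RATE, 60, 1);
          CHECK((*writer)->SetInputMediaType(*stream, in, nullptr));
          in->Release();

          return (*writer)->BeginWriting();
      }

      // Wrap one captured texture in a sample and hand it to the encoder.
      HRESULT WriteFrame(IMFSinkWriter *writer, DWORD stream,
                         ID3D11Texture2D *tex, LONGLONG timeHns, LONGLONG durHns)
      {
          IMFMediaBuffer *buf = nullptr;      // wraps the GPU texture, no CPU copy
          CHECK(MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), tex,
                                          0, FALSE, &buf));
          IMF2DBuffer *buf2d = nullptr;
          CHECK(buf->QueryInterface(IID_PPV_ARGS(&buf2d)));
          DWORD len = 0;
          buf2d->GetContiguousLength(&len);   // MF needs the buffer length set
          buf->SetCurrentLength(len);
          buf2d->Release();

          IMFSample *sample = nullptr;
          CHECK(MFCreateSample(&sample));
          CHECK(sample->AddBuffer(buf));
          CHECK(sample->SetSampleTime(timeHns));     // 100-ns units
          CHECK(sample->SetSampleDuration(durHns));
          HRESULT hr = writer->WriteSample(stream, sample);
          sample->Release();
          buf->Release();
          return hr;
          // After the last frame: writer->Finalize(); MFShutdown();
      }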

    Since I haven't done anything like this before, I'm asking if I'm moving in the right direction. In the end, I want to have a simple, preferably low-latency, desktop capture video stream that I can view from a remote device.

    Also, I'm wondering if I can make use of the dirty and move regions provided by Desktop Duplication. Instead of encoding full frames, I could send those regions over the network and do the compositing on the client side, but this means the client would have to have DirectX 11.1 or higher available, which is not possible if I want to stream to a mobile platform. (A rough sketch of such a wire format follows below.)
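
    A rough illustration of what the dirty/move-region idea could look like on the wire (a hypothetical protocol with names invented for this sketch; the counts come from IDXGIOutputDuplication::GetFrameDirtyRects and GetFrameMoveRects):

      // Hypothetical wire format: a small per-frame header, then one entry per
      // region. Pixels are sent raw (BGRA) here; a real protocol would likely
      // compress them.
      #include <cstdint>
      #include <windows.h>   // RECT, POINT
      #include <dxgi1_2.h>   // DXGI_OUTDUPL_MOVE_RECT

      #pragma pack(push, 1)
      struct FrameHeader {
          uint32_t frameNumber;
          uint16_t dirtyCount;    // from GetFrameDirtyRects
          uint16_t moveCount;     // from GetFrameMoveRects
      };
      struct DirtyRegion {
          RECT     rect;          // where the pixels land on the client's image
          uint32_t payloadBytes;  // (right-left) * (bottom-top) * 4, BGRA
          // followed by payloadBytes of raw pixel data
      };
      struct MoveRegion {
          POINT sourcePoint;      // mirrors DXGI_OUTDUPL_MOVE_RECT: a copy within
          RECT  destinationRect;  // the client's existing image, no pixel payload
      };
      #pragma pack(pop)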