
Media (91)

Other articles (74)

  • Improving the basic version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    To do this, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through SPIP’s translation interface, where all of MediaSPIP’s language modules are available. Just subscribe to the translators’ mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (10330)

  • Android HLS: Any way to edit the m3u8 file so that I know which segment is currently streaming/playing in my player using HLS

    1 August 2014, by hclee

    I am now working on a VOD project using HLS.

    I use the VideoViewBuffer in VitamioDemo to stream my video, which is stored on my local server.
    The Vitamio library is awesome: I am able to stream the video and get the bit rate, the buffering percentage and some metadata.

    We use ffmpeg to convert the video into m3u8 and the corresponding ts files.
    But now our team wants to know which segment (which ts file) of the video is currently being streamed.

    That’s a very important part of our project, but we are stuck at this point.

    I tried to use the MediaMetadata in Vitamio, but only the duration of the video is found.
    I am wondering if we can add some metadata in the m3u8 file so that we can retrieve the name of the current segment during streaming.
    e.g. the original m3u8 is like this:
    #EXTINF:10.500000,
    stream00000.ts

    Is it possible for me to change it as follows:
    #EXTINF:10.500000, name of segment
    stream00000.ts

    But all I can get using MediaMetadataRetriever is null except for the duration.

    It seems that nobody has done this before, so I can’t find much useful information about it.

    Does anybody know how to implement this?
    Or should I use a packet sniffer to monitor the network traffic myself?
    Or would MediaScanner be helpful?
    Or do I need to use code in android.os?

    Thanks in advance!
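
    One possible direction, sketched below purely as an illustration (the function name, the playlist path and the reliance on the player’s current position are assumptions, not part of the original question), is to parse the #EXTINF durations from the playlist yourself and map the current playback position to a segment index and URI. The HLS playlist syntax does allow an optional title after the duration in an #EXTINF line, but nothing standard exposes it through MediaMetadataRetriever.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return the 0-based index of the segment covering position_sec and copy
     * its URI line (e.g. "stream00000.ts") into name; return -1 if not found. */
    int segment_at(const char *m3u8_path, double position_sec,
                   char *name, size_t name_len)
    {
       FILE *f = fopen(m3u8_path, "r");
       if (!f)
          return -1;

       char line[512];
       double elapsed = 0.0, duration = -1.0;
       int index = 0;

       while (fgets(line, sizeof line, f)) {
          if (strncmp(line, "#EXTINF:", 8) == 0) {
             /* e.g. "#EXTINF:10.500000," -> 10.5 seconds */
             duration = atof(line + 8);
          } else if (line[0] != '#' && line[0] != '\n' && duration >= 0.0) {
             if (position_sec < elapsed + duration) {
                /* This URI is the segment currently being played. */
                strncpy(name, line, name_len - 1);
                name[name_len - 1] = '\0';
                name[strcspn(name, "\r\n")] = '\0';
                fclose(f);
                return index;
             }
             elapsed += duration;
             duration = -1.0;
             index++;
          }
       }
       fclose(f);
       return -1;
    }

    On Android the same logic can be ported to Java and fed with the player’s current position in seconds (for example getCurrentPosition() / 1000.0 on a VideoView-style player), without touching the m3u8 or the ts files at all.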

  • How can I quantitatively measure gstreamer H264 latency between source and display?

    16 December 2014, by KevinM

    I have a project where we are using gstreamer, x264, etc., to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We’re using gstreamer on both the video source (camera) systems and the display monitors.

    We’re using RTP, payload 96, and libx264 to encode the video stream (no audio).

    But now I need to quantify the latency between (as close as possible to) frame acquisition and display.

    Does anyone have suggestions that use the existing software?

    Ideally I’d like to be able to run the testing software for a few hours to generate enough statistics to quantify the system. Meaning that I can’t do one-off tests like pointing the source camera at the receiving display monitor showing a high-resolution clock and manually calculating the difference...

    I do realise that using a pure software-only solution, I will not be able to quantify the video acquisition delay (i.e. CCD to framebuffer).

    I can arrange that the system clocks on the source and display systems are synchronised to a high accuracy (using PTP), so I will be able to trust the system clocks (else I will use some software to track the difference between the system clocks and remove this from the test results).

    In case it helps, the project applications are written in C++, so I could use C event callbacks, if they’re available, to embed the system time in a custom header (e.g. frame xyz, encoded at time TTT) and use the same information on the receiver to calculate the difference.
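
    One way to gather such statistics with the existing software, sketched below under assumptions that are not in the question (GStreamer 1.x C API, PTP-synchronised clocks, and the usual "sink" static pad on the chosen elements), is to add a buffer pad probe on the encoder’s input pad on the source machine and on the video sink’s input pad on each display machine, log a frame counter and the wall-clock time for every buffer, and correlate the two logs offline (for example by frame index, if no frames are dropped) to obtain per-frame latency over hours of running:

    #include <gst/gst.h>

    /* Log a frame counter, the buffer PTS and the wall-clock time
     * (microseconds since the epoch) for every buffer on the probed pad. */
    static GstPadProbeReturn
    log_frame(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
       static guint64 count = 0;
       GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
       g_print("%s frame=%" G_GUINT64_FORMAT " pts=%" G_GUINT64_FORMAT
               " wallclock_us=%" G_GINT64_FORMAT "\n",
               (const char *)user_data, count++,
               (guint64) GST_BUFFER_PTS(buf), g_get_real_time());
       return GST_PAD_PROBE_OK;
    }

    /* Attach to the encoder element on the sender and to the video sink on
     * the receiver; "sink" is the usual name of their static input pad. */
    static void attach_latency_probe(GstElement *element, const char *tag)
    {
       GstPad *pad = gst_element_get_static_pad(element, "sink");
       gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
                         log_frame, (gpointer) tag, NULL);
       gst_object_unref(pad);
    }

    The probe runs in the streaming thread, so for a multi-hour run the g_print() calls could be replaced by writes to a preallocated buffer to avoid perturbing the pipeline being measured.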

  • FFMPEG + SDL: How to show multiple frames in separate regions?

    19 May 2014, by user3051473

    I’m using ffmpeg and SDL to develop a camera-monitoring app, and I want to show four separate streams at the same time (from four RTSP sources), laid out as in the diagram below:

    Right now I achieve this by setting the SDL display region: each stream is drawn into a different SDL_Rect, selected by the variable i (one per region, REG1 to REG4).

    But this causes an efficiency problem: for every region I need to scale the frame, lock/unlock the screen, and then display.

    What I’m wondering is: can I merge the 4 different AVFrames (from REG1 to REG4) into one whole picture and then show that picture?

    I hope the description above is detailed enough for you to understand my problem.
    Thanks for your help.

    -------------------
    -        -        -
    -  REG1  -  REG2  -
    -        -        -
    -------------------
    -        -        -
    -  REG3  -  REG4  -
    -        -        -
    -------------------

    // WIDTH and HEIGHT represent the whole screen size; i selects the
    // quadrant (0..3) used for regions REG1 to REG4.
    void ShowFrame(AVFrame *pFrame, AVCodecContext *pCodecContext, int i)
    {
       SDL_LockYUVOverlay(pBmp);

       // Point an AVPicture at the overlay's planes (YV12 stores Y, V, U).
       AVPicture pict;
       pict.data[0] = pBmp->pixels[0];
       pict.data[1] = pBmp->pixels[2];
       pict.data[2] = pBmp->pixels[1];
       pict.linesize[0] = pBmp->pitches[0];
       pict.linesize[1] = pBmp->pitches[2];
       pict.linesize[2] = pBmp->pitches[1];

       // TODO scale pFrame to pict (e.g. with sws_scale)
       // ...

       SDL_UnlockYUVOverlay(pBmp);

       // Choose this stream's quadrant of the screen.
       SDL_Rect rect;
       rect.x = (WIDTH / 2) * (i % 2);
       rect.y = (HEIGHT / 2) * (i > 1 ? 1 : 0);
       rect.w = WIDTH / 2;
       rect.h = HEIGHT / 2;
       SDL_DisplayYUVOverlay(pBmp, &rect);
    }
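
    As for the merge idea, a minimal sketch follows, as an illustration only (pComposite, WIDTH, HEIGHT and the 0-based index i are assumptions, not from the original post, and the scaler context should really be created once per stream rather than per frame): scale each decoded frame directly into its quadrant of a single WIDTH x HEIGHT YUV420P frame, then copy that one frame into the overlay and display it once for the whole screen.

    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    // Scale one stream's decoded frame into its quadrant of pComposite,
    // assumed to be a WIDTH x HEIGHT AV_PIX_FMT_YUV420P frame allocated
    // elsewhere (e.g. with av_frame_get_buffer()); WIDTH and HEIGHT are
    // assumed even, and i runs from 0 to 3 (REG1 to REG4).
    void BlendRegion(AVFrame *pSrc, AVFrame *pComposite, int i)
    {
       int w = WIDTH / 2, h = HEIGHT / 2;
       int x = w * (i % 2), y = h * (i / 2);

       // Destination pointers offset into the composite's planes; the chroma
       // planes of YUV420P are subsampled by two in each direction.
       uint8_t *dst[4] = {
          pComposite->data[0] + y * pComposite->linesize[0] + x,
          pComposite->data[1] + (y / 2) * pComposite->linesize[1] + x / 2,
          pComposite->data[2] + (y / 2) * pComposite->linesize[2] + x / 2,
          NULL
       };

       // For real use, create this context once per stream and reuse it.
       struct SwsContext *sws = sws_getContext(pSrc->width, pSrc->height,
                                               (enum AVPixelFormat)pSrc->format,
                                               w, h, AV_PIX_FMT_YUV420P,
                                               SWS_BILINEAR, NULL, NULL, NULL);
       sws_scale(sws, (const uint8_t * const *)pSrc->data, pSrc->linesize,
                 0, pSrc->height, dst, pComposite->linesize);
       sws_freeContext(sws);
    }

    ShowFrame() would then only need to copy pComposite into the overlay and call SDL_DisplayYUVOverlay() once for the whole screen, instead of locking, scaling and displaying once per region.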