Media (0)

No media matching your criteria is available on this site.

Other articles (74)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

On other sites (9160)

  • Demuxing and decoding raw RTP with libavformat

    8 February 2023, by kevmo314

    I'm implementing a pipeline where I receive inbound RTP packets in memory but I'm having trouble figuring out how to set up libavformat to handle/unwrap the RTP packets.

    I have all the necessary information about the underlying codec, but since it is H.264 I cannot trivially strip the RTP header. I create the input context with goInputFunction, which writes one packet per invocation.

    void *readbuf = av_malloc(1500);
    AVIOContext *avioreadctx = avio_alloc_context(readbuf, 1500, 0, transcoder, &goInputFunction, NULL, NULL);
    AVFormatContext *inputctx = avformat_alloc_context();
    inputctx->pb = avioreadctx;
    inputctx->flags |= AVFMT_FLAG_CUSTOM_IO;

    When I open it with avformat_open_input(&inputctx, NULL, NULL, NULL), it repeatedly calls the read function but doesn't actually progress, I suspect because the RTP stream itself does not carry enough information to fully describe the codec. If I leave this open call out, then av_read_frame(inputctx, input_packet) later segfaults, I'm guessing because the input context is uninitialized.

    So, to my question: is it possible to set the codec details that the SDP would typically provide, but manually?

    I'm looking for an example of how to manually configure the AVFormatContext to consume RTP packets without an SDP and without setting up a UDP port listener; one possible approach is sketched below.

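    A commonly suggested approach for this situation, sketched below, is to describe the stream with a small in-memory SDP, open it with libavformat's "sdp" demuxer, and set the demuxer option "sdp_flags" to "custom_io" so that RTP packets are read from the format context's pb rather than from UDP sockets. This sketch is not from the post itself: the SDP text, the payload type 96, read_rtp_packet() and open_rtp_input() are illustrative placeholders.

    /* Minimal sketch: serve an in-memory SDP to the "sdp" demuxer, then point
     * the format context at a custom AVIO context that returns one raw RTP
     * packet per read. Assumes H.264 on payload type 96. */
    #include <string.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>

    static const char sdp_text[] =
        "v=0\r\n"
        "o=- 0 0 IN IP4 127.0.0.1\r\n"
        "s=RTP input\r\n"
        "c=IN IP4 127.0.0.1\r\n"
        "t=0 0\r\n"
        "m=video 5000 RTP/AVP 96\r\n"
        "a=rtpmap:96 H264/90000\r\n";

    /* Hypothetical callback supplied by the application: copies one received
     * RTP packet into buf and returns its size. */
    extern int read_rtp_packet(void *opaque, uint8_t *buf, int buf_size);

    /* Serves the SDP text while avformat_open_input() parses the "file". */
    static int read_sdp(void *opaque, uint8_t *buf, int buf_size)
    {
        int *offset = opaque;
        int left = (int)sizeof(sdp_text) - 1 - *offset;
        if (left <= 0)
            return AVERROR_EOF;
        if (buf_size > left)
            buf_size = left;
        memcpy(buf, sdp_text + *offset, buf_size);
        *offset += buf_size;
        return buf_size;
    }

    int open_rtp_input(AVFormatContext **out, void *rtp_source)
    {
        int sdp_offset = 0, ret;
        AVIOContext *sdp_pb = avio_alloc_context(av_malloc(4096), 4096, 0,
                                                 &sdp_offset, read_sdp, NULL, NULL);
        AVIOContext *rtp_pb = avio_alloc_context(av_malloc(1500), 1500, 0,
                                                 rtp_source, read_rtp_packet, NULL, NULL);
        AVFormatContext *ctx = avformat_alloc_context();
        AVDictionary *opts = NULL;

        ctx->pb = sdp_pb;                  /* the SDP is consumed during open */
        ctx->flags |= AVFMT_FLAG_CUSTOM_IO;
        av_dict_set(&opts, "sdp_flags", "custom_io", 0);

        ret = avformat_open_input(&ctx, NULL, av_find_input_format("sdp"), &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;

        /* From here on, the demuxer reads raw RTP packets from ctx->pb. */
        av_freep(&sdp_pb->buffer);
        avio_context_free(&sdp_pb);
        ctx->pb = rtp_pb;
        *out = ctx;
        return 0;                          /* av_read_frame() now yields H.264 */
    }

    The idea is that the SDP supplies the codec details the question asks about, while "custom_io" keeps the demuxer from opening its own UDP listener; once avformat_open_input() has parsed the SDP, ctx->pb can be switched to the callback that delivers the in-memory RTP packets.
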
  • lavf/gifdec: do not mark as notimestamps

    28 September 2023, by Anton Khirnov
    lavf/gifdec: do not mark as notimestamps
    

    The demuxer does not set packet timestamps itself after
    c6b6356635f598b095606cd126f31bc6ab916225 and instead relies on the
    parser to do it. However, this does not matter from the caller
    perspective as it still happens inside the demuxer. The demuxer should
    thus not be flagged as not having timestamps.

    • [DH] libavformat/gifdec.c
  • MEAN stack express.js video uploader/converter

    5 January 2017, by MattJ

    The idea is a social site where people can upload their videos. I am planning to use multer for uploading (limiting by size and by mimetype). Then, mostly for performance and storage reasons, I plan to use fluent-ffmpeg to convert the upload to MP4 and store it somewhere on the server, with a reference in MongoDB. Since I do not want the user to wait while the whole process is done, I plan to separate it into two main parts:

    1. Uploading
    2. Converting and storing.

    The user uploads the file, and then a separate Node process (using node-schedule) runs a check every minute or so to convert all files in the directory and then add the references to MongoDB. So what do you guys think? What is the best approach performance-wise?