
Media (91)

Other articles (98)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type news item, the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is version 0.2 or later. If necessary, contact your MédiaSpip administrator to find out.

On other sites (9463)

  • Stream microphone from client browser to remote server and pass audio in real time to ffmpeg to combine with a second video source

    4 May 2021, by fakeguybrushthreepwood

    As a beginner at working with these kinds of real-time streaming services, I've spent hours trying to work out how this is possible, but can't seem to figure out precisely how I'd go about it.

    I'm prototyping a basic personal web app that does the following:

    1. In a web browser, the web application has a button that says 'Stream Microphone'; when pressed, it streams the audio from the user's microphone (the user obviously has to grant permission to send their microphone audio) through to the server, which I was presuming would be running node.js (no specific reason at this point, just thought this is how I'd go about doing it).

    2. The server receives the audio close enough to real-time somehow (not sure how I'd do this).

    3. I can then run ffmpeg on the command line, take the audio as it comes in in real time, and add it as the sound to a video file I want to play (let's just say testmovie.mp4).

    I've looked at various solutions - such as WebRTC, RTP/RTSP, piping audio into ffmpeg, GStreamer, Kurento, Flashphoner and/or Wowza - but they all look overly complicated and usually seem to focus on video along with audio. I just need to work with audio.
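    One way to wire up steps 1-3: in the browser, navigator.mediaDevices.getUserMedia({ audio: true }) plus a MediaRecorder started with a timeslice (e.g. recorder.start(250)) emits compressed WebM/Opus chunks every few hundred milliseconds, which ondataavailable can forward over a WebSocket; on the server, each chunk is written verbatim to ffmpeg's stdin while ffmpeg pulls the video track from the file. Below is a minimal server-side sketch in Java using the org.java-websocket Java-WebSocket library (the question leans toward node.js, but the same shape works in any runtime that can pipe into a child process). The port, file name and RTMP target are hypothetical placeholders, not anything from the question.

    import org.java_websocket.WebSocket;
    import org.java_websocket.handshake.ClientHandshake;
    import org.java_websocket.server.WebSocketServer;

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;

    public class MicToFfmpeg extends WebSocketServer {

        private final OutputStream ffmpegStdin;

        MicToFfmpeg(int port, OutputStream ffmpegStdin) {
            super(new InetSocketAddress(port));
            this.ffmpegStdin = ffmpegStdin;
        }

        // Each binary message is one MediaRecorder chunk (WebM/Opus);
        // forward it verbatim to ffmpeg's stdin.
        @Override
        public void onMessage(WebSocket conn, ByteBuffer chunk) {
            try {
                byte[] bytes = new byte[chunk.remaining()];
                chunk.get(bytes);
                ffmpegStdin.write(bytes);
                ffmpegStdin.flush();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @Override public void onMessage(WebSocket conn, String message) {}
        @Override public void onOpen(WebSocket conn, ClientHandshake hs) {}
        @Override public void onClose(WebSocket conn, int code, String reason, boolean remote) {}
        @Override public void onError(WebSocket conn, Exception ex) { ex.printStackTrace(); }
        @Override public void onStart() {}

        public static void main(String[] args) throws Exception {
            // Video comes from the file; audio comes from stdin (pipe:0),
            // i.e. the live microphone stream relayed over the WebSocket.
            Process ffmpeg = new ProcessBuilder(
                    "ffmpeg", "-re", "-i", "testmovie.mp4",    // video source
                    "-i", "pipe:0",                            // live audio
                    "-map", "0:v", "-map", "1:a",
                    "-c:v", "copy", "-c:a", "aac",
                    "-f", "flv", "rtmp://localhost/live/out")  // placeholder target
                    .redirectError(ProcessBuilder.Redirect.INHERIT)
                    .start();
            new MicToFfmpeg(8080, ffmpeg.getOutputStream()).start();
        }
    }

    Since MediaRecorder already compresses to Opus, the per-chunk payload stays small enough for near-real-time delivery without any custom audio handling on the server.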

    


  • Extended client ownership of MediaCodec encoder output buffers for RTMP streaming

    13 February 2014, by dbro

    Background:

    I've connected Android's MediaCodec to FFmpeg for muxing a variety of formats not supported by MediaMuxer, including rtmp:// output via a .flv container. Such streaming muxers require longer, unpredictable ownership of MediaCodec's output buffers, as they may perform networking I/O on any packet processing step. For my video stream, I'm using MediaCodec configured for Surface input. To decouple muxing from encoding, I queue MediaCodec's ByteBuffer output buffers to my muxer via a Handler.

    All works splendidly if I mux the .flv output to a file rather than to an rtmp endpoint.

    Problem:

    When muxing to an rtmp://... endpoint, I notice my streaming application begins to block on calls to eglSwapBuffers(mEGLDisplay, mEncodingEGLSurface) at dequeueOutputBuffer() once I'm retaining even a few MediaCodec output buffers in my muxing queue, as MediaCodec seems to be locked to only 4 output buffers.

    Any tricks to avoid copying all encoder output returned by MediaCodec#dequeueOutputBuffer and immediately calling releaseOutputBuffer(...)?

    The full source of my project is available on GitHub. Specifically, see:

    • AndroidEncoder.java: Abstract Encoder class with behavior shared between the Audio and Video encoders, mainly drainEncoder(). Writes data to a Muxer instance.
    • FFmpegMuxer.java: Implements Muxer.
    • CameraEncoder.java: Sends camera frames to an AndroidEncoder subclass configured for video encoding.

    Systrace

    Here's some systrace output streaming 720p @ 2 Mbps video to Zencoder:

    [systrace output image]

    Solved

    Copying then releasing the MediaCodec encoder output ByteBuffers as soon as they're available solves the issue without significantly affecting performance. I recycle the ByteBuffer copies in an ArrayDeque<ByteBuffer> for each muxer track, which limits the number of allocations.
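    For reference, the copy-then-release pattern described above looks roughly like this: a sketch assuming the pre-API-21 getOutputBuffers() style that matches the 2014 time frame; mEncoder, mBufferPool (an ArrayDeque<ByteBuffer>), mMuxerHandler, MSG_WRITE_PACKET and TIMEOUT_USEC are illustrative names, not the project's actual code.

    void drainOnce() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int index = mEncoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
        if (index < 0) return;

        ByteBuffer encoded = mEncoder.getOutputBuffers()[index];
        encoded.position(info.offset);
        encoded.limit(info.offset + info.size);

        // Reuse a pooled ByteBuffer when one is big enough, to limit allocations.
        ByteBuffer copy = mBufferPool.pollFirst();
        if (copy == null || copy.capacity() < info.size) {
            copy = ByteBuffer.allocateDirect(info.size);
        }
        copy.clear();
        copy.put(encoded);
        copy.flip();

        // Hand the copy to the muxer thread, then give MediaCodec its buffer
        // back immediately, so the encoder never stalls while the muxer blocks
        // on network I/O.
        mMuxerHandler.obtainMessage(MSG_WRITE_PACKET, copy).sendToTarget();
        mEncoder.releaseOutputBuffer(index, false);
        // The muxer returns each copy to mBufferPool once the packet is written.
    }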

  • libavfilter/vf_vpp: Add common filters of the qsv vpp

    24 August 2017, by Huang, Zhengxu
    libavfilter/vf_vpp: Add common filters of the qsv vpp
    

    Add common filters of the qsv vpp features, including scale, denoise,
    deinterlace, frc, crop and procAmp.

    In testing, performance drops significantly when these features are used in
    cascade mode as separate filters (e.g. qsv framerate + qsv scale + qsv
    deinterlace + qsv denoise), in both the system and video memory cases.
    The code is also highly redundant, because each filter duplicates the same
    session and surface creation and management.
    So we add a common qsv filter.

    Usage:
    ffmpeg -hwaccel qsv -c:v h264_qsv -r 25 -i in -vf
    vpp_qsv=w=iw/2:h=400:deinterlace=1:framerate=60:detail=50:denoise=50
    -b 2M -maxrate 3M -c:v h264_qsv -y out.h264

    Signed-off-by: ChaoX A Liu <chaox.a.liu@gmail.com>
    Signed-off-by: Zhengxu Huang <zhengxu.maxwell@gmail.com>
    Signed-off-by: Andrew Zhang <huazh407@gmail.com>
    Change-Id: I130392ce722138c209ab658c5f03f0009b6e8024
    Signed-off-by: Maxym Dmytrychenko <maxim.d33@gmail.com>

    • configure
    • libavfilter/Makefile
    • libavfilter/allfilters.c
    • libavfilter/vf_vpp_qsv.c