
Other articles (45)

  • Personalizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP through the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form For a document of the news type, the default fields are: Publication date (customize the publication date) (...)

  • (De)activating features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To reach it, go to the configuration area and then to the "Plugin management" page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated to work seamlessly with each (...)

On other sites (8138)

  • Live Video Encoding and Streaming on a Webpage

    9 March 2016, by Ockhius

    I am trying to show a live webcam video stream on a webpage, and I have a working draft. However, I am not satisfied with the performance and am looking for a better way to do the job.

    I have a webcam connected to a Raspberry Pi and a web server, which is a simple Python Flask server. Webcam images are captured using OpenCV and encoded as JPEG. Those JPEGs are then sent to one of the server's UDP ports. What I have built so far is essentially a homemade MJPEG (Motion JPEG) stream.

    On the server side, a simple Python script continuously reads the UDP port and puts each JPEG image into an HTML5 canvas. That is fast enough to create the perception of a live stream.
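    The server-side read loop described above can be sketched with the standard library alone (the port number and the completeness check are assumptions; the original post does not give them):

```python
import socket

UDP_PORT = 5005     # assumed port; the original setup does not name one
MAX_DGRAM = 65507   # largest possible UDP payload

def is_complete_jpeg(payload: bytes) -> bool:
    """A JPEG frame starts with the SOI marker (FF D8) and ends with EOI (FF D9)."""
    return payload[:2] == b"\xff\xd8" and payload[-2:] == b"\xff\xd9"

def receive_frames(sock: socket.socket):
    """Yield complete JPEG frames read from the UDP socket, dropping damaged ones."""
    while True:
        payload, _addr = sock.recvfrom(MAX_DGRAM)
        if is_complete_jpeg(payload):
            yield payload

# Usage (blocks waiting for datagrams):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", UDP_PORT))
#   for frame in receive_frames(sock):
#       ...push the frame to the page, e.g. as a data: URL for the canvas
```

    Dropping damaged datagrams this way also illustrates a weakness of the homemade approach: UDP offers no retransmission, so every lost datagram is a lost frame.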

    Problems:

    • This barely compresses the video. In fact it does not compress the video at all; it only reduces the size of each frame by encoding it as JPEG.

    • The FPS is low, and the quality of the stream is not great either.

    • It is not a major point for now, but UDP is not a secure way to stream video.

    • The server is busy picking images off the UDP port; it needs a threaded design.

    Alternatives:

    • I have used FFmpeg before to convert video formats and to stream pre-recorded video. I guess it is possible to encode (say, to H.264) and stream the webcam's live video using ffmpeg or avconv. (Encoding)

    Is this applicable on a Raspberry Pi?

    • VLC is able to play live videos streamed over the network. (Stream)

    Is there any media player that can be embedded in HTML/JavaScript to handle a network stream the way VLC does?

    • I have read about HLS (HTTP Live Streaming) and MPEG-DASH.

    Do these apply to this case? If so, how should I use them?

    Is there any other way to show a live stream on a webpage?

    • RTSP is a secure protocol.

    What is the best practice for a transport-layer protocol in video streaming?
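    The encoding and HLS alternatives raised above can be sketched as a single ffmpeg command (the device path, resolution, bitrate, and output path are assumptions; on a Raspberry Pi of that era the hardware encoder h264_omx, where available, is far cheaper than libx264):

```shell
# Read the webcam, encode to H.264, and publish a short rolling HLS playlist
# that a web page can play with hls.js (or natively in Safari's <video> tag).
ffmpeg -f v4l2 -framerate 15 -video_size 640x480 -i /dev/video0 \
       -c:v libx264 -preset ultrafast -tune zerolatency -b:v 800k \
       -f hls -hls_time 2 -hls_list_size 4 -hls_flags delete_segments \
       /var/www/html/stream.m3u8
```

    The trade-off of this route is a few seconds of segment-buffering latency, in exchange for real inter-frame compression and delivery over plain HTTP.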

  • Stream RTSP into Unity3D object [closed]

    28 November 2019, by Sokhon Chhun

    As of now I'm working on a Unity project that streams rtsp://XX.XX.XX.XX (admin:xxxx, pass:xxxx) from an IP camera. I have also worked with RTSPClientSharp, with which I could only stream individual frames, so my question is: how do I stream an rtsp:// feed into a Unity3D object using FFmpeg (if that is possible)?
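    One hedged approach (the URL and credentials are placeholders from the question, and this assumes the camera already emits H.264): use ffmpeg to pull the RTSP feed and remux it into HLS, which Unity's VideoPlayer can open on platforms where HLS is supported:

```shell
# Pull the camera feed over TCP (more reliable than UDP through NAT) and
# remux it into an HLS playlist without re-encoding.
ffmpeg -rtsp_transport tcp -i "rtsp://admin:xxxx@XX.XX.XX.XX/stream" \
       -c copy -f hls -hls_time 2 -hls_list_size 4 stream.m3u8
```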

  • Opening the ALSA microphone port using libavformat

    17 May 2013, by Kartik Aiyer

    I'm writing a fairly simple application to open the microphone port using the ALSA format on a Linux VM. I'm trying to do this using libavformat and the other ffmpeg libs (avutil, avcodec, etc.).

    I followed what ffmpeg does (ffmpeg_opt.c: open_input_file) to open the actual microphone port (hw:0,0). I find the ALSA format using av_find_input_format("alsa"), which returns an AVInputFormat* object.

    I then use this with avformat_open_input.
    I didn't pre-create an AVFormatContext and load it with any params before the above call (I rely on it being created inside avformat_open_input). This worked, and I was able to read packets off the port. However, after a while I was no longer able to open the port. I added a bunch of debug output in the libs and found that in the call chain
    avformat_open_input -> init_input -> ... -> avio_open2 -> ffurl_open -> ffurl_alloc
    the final call, ffurl_alloc, goes through all the protocols and checks whether "hw" matches any of them. This fails.

    I was wondering if anyone has tried opening a mic with the ALSA format using libavformat and the other libs, and whether they have any tips on how to get this done. Specifically, how does ffmpeg translate something like hw:0,0 to the actual device port?
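    For reference, a minimal sketch in C of the open path described above (era-appropriate API; everything beyond hw:0,0 is an assumption). One step that is easy to miss is avdevice_register_all(), which is what registers the alsa demuxer; when the AVInputFormat is passed explicitly, the alsa demuxer is AVFMT_NOFILE and no URL protocol lookup should occur for "hw:0,0":

```c
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <stdio.h>

int main(void)
{
    AVFormatContext *ctx = NULL;
    AVInputFormat *alsa;
    int err;

    av_register_all();        /* required on the ffmpeg versions of that era */
    avdevice_register_all();  /* registers the alsa/v4l2 device (de)muxers   */

    alsa = av_find_input_format("alsa");
    if (!alsa) {
        fprintf(stderr, "alsa input format not available\n");
        return 1;
    }

    /* "hw:0,0" is interpreted by the alsa demuxer itself, not by the
     * URL protocol layer, because the format is passed explicitly. */
    err = avformat_open_input(&ctx, "hw:0,0", alsa, NULL);
    if (err < 0) {
        char buf[128];
        av_strerror(err, buf, sizeof(buf));
        fprintf(stderr, "avformat_open_input failed: %s\n", buf);
        return 1;
    }

    /* ... av_read_frame(ctx, &pkt) loop would go here ... */

    avformat_close_input(&ctx);
    return 0;
}
```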

    P.S.: I tried the following command with the command-line app:
    ffmpeg -f alsa -i hw:0,0 -ac 2 -acodec pcm_s16le output.aac
    This worked for a while and then suddenly stopped working as well, with the same kind of problem: unable to open the protocol.

    I'm not sure what changed or why it is no longer able to open the audio port, but any advice on how audio ports are opened with the rather different handle of hw:x,x would be much appreciated.

    Alternatively, I realize ffmpeg is a little heavyweight for just reading PCM packets and then encoding, and I could take a more direct approach by using the AAC libs directly. Can anyone recommend a lib or kernel driver that I can use to open the microphone (given that it is an ALSA card) and read PCM data out of it? (I'm new to Linux, so please forgive me if any of this sounds trivial.)
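    For that more direct approach, plain alsa-lib can open the card and read interleaved PCM without any of the ffmpeg machinery. A minimal capture sketch (device name, sample rate, and channel count are assumptions):

```c
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[1024 * 2];          /* 1024 frames, 2 channels, S16_LE */
    snd_pcm_sframes_t n;

    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_CAPTURE, 0) < 0) {
        fprintf(stderr, "cannot open capture device\n");
        return 1;
    }
    /* 2 channels, 44.1 kHz, allow resampling, 0.5 s latency */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 44100, 1, 500000);

    n = snd_pcm_readi(pcm, buf, 1024);  /* blocking read of PCM frames */
    if (n > 0)
        printf("read %ld frames\n", (long)n);

    snd_pcm_close(pcm);
    return 0;
}
```

    The PCM buffers read this way could then be handed to an AAC encoder directly, skipping libavformat entirely.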

    Thanks
    Kartik