Advanced search

Media (0)

Keyword: - Tags -/gis

No media matching your criteria is available on the site.

Other articles (96)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a sharing instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9254)

  • Cleaning up ffmpeg output?

    29 August 2022, by Chris

    I am using ffmpeg to restream a stream.

    ffmpeg -loglevel panic -hide_banner -http_proxy *proxy* -i *link* -vcodec copy -acodec copy -f mpegts pipe:

    The output is streamed using flask in python.

    VLC never has a problem playing the streams, but some programs can fail.
TVHeadend sometimes reports things like...

    libav: AVFormatContext: Could not find codec parameters for stream 0 (Video: h264, 1 reference frame ([27][0][0][0] / 0x001B), none(left)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
libav: AVFormatContext: Could not find codec parameters for stream 1 (Audio: aac ([15][0][0][0] / 0x000F), 0 channels): unspecified sample format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
libav: AVFormatContext: sample rate not set
libav: Unable to write header
libav: AVCodecContext: mmco: unref short failure
libav: AVCodecContext: number of reference frames (0+5) exceeds max (4; probably corrupt input), discarding one

    It's always at the start of the stream, and it can take several retries before the stream starts playing. Each retry involves generating a fresh link and starting the stream over in my flask app... so it can feel like a long wait if it doesn't decide to work on the first (or fifth!) attempt.

    Is there any way to ensure the stream is presented in a way that even the fussiest client will be happy with?
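
    For reference, here is a minimal sketch of the ffmpeg-to-Flask pipe described above. It is only an illustration of that setup: the route name, the placeholder URL and the chunk size are assumptions, and the -http_proxy option is left out because the real proxy value is not shown in the question.

    import subprocess
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/stream")  # hypothetical route name
    def stream():
        # Same idea as the command above: copy both codecs and emit MPEG-TS on stdout.
        cmd = [
            "ffmpeg", "-loglevel", "panic", "-hide_banner",
            "-i", "https://example.com/source",  # stands in for *link*
            "-vcodec", "copy", "-acodec", "copy",
            "-f", "mpegts", "pipe:1",
        ]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

        def generate():
            # Forward the raw MPEG-TS bytes to the client chunk by chunk.
            while True:
                chunk = proc.stdout.read(4096)
                if not chunk:
                    break
                yield chunk

        return Response(generate(), mimetype="video/mp2t")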

  • JavaCV grab frame method delays and returns old frames

    27 January 2019, by Null Pointer

    I'm trying to create a video player in Java using JavaCV and its FFmpegFrameGrabber class. Simply put, inside a loop I use .grab() to get a frame and then paint it on a panel.

    The problem is that the player falls behind. For example, after 30 seconds have passed, only 20 seconds of the video have played.

    The source is fine; other players can play the stream normally. The problem is possibly the long painting time.

    What I do not understand is this: why does the .grab() method bring me a frame from 10 seconds ago? Shouldn't it just grab the frame that is being streamed at the moment?

    (Sorry for not providing working code; it is spread across several huge classes.)

    I use the following grabber options (selected by another colleague):

    grabber.setImageHeight(480);                      // scale decoded frames to 640x480
    grabber.setImageWidth(640);
    grabber.setOption("reconnect", "1");              // FFmpeg auto-reconnect options
    grabber.setOption("reconnect_at_eof", "1");
    grabber.setOption("reconnect_streamed", "1");
    grabber.setOption("reconnect_delay_max", "2");    // at most 2 s between reconnect attempts
    grabber.setOption("preset", "veryfast");          // x264 encoder preset/tune options
    grabber.setOption("probesize", "192");            // probe size in bytes
    grabber.setOption("tune", "zerolatency");
    grabber.setFrameRate(30.0);
    grabber.setOption("buffer_size", "" + this.bufferSize);
    grabber.setOption("max_delay", "500000");         // maximum demux delay, in microseconds
    grabber.setOption("stimeout", String.valueOf(6000000)); // socket timeout, in microseconds
    grabber.setOption("loglevel", "quiet");
    grabber.start();

    Thanks

  • OpenCV returns an Empty Frame on video.read

    20 March 2019, by Ikechukwu Anude

    Below is the relevant code

    import cv2 as cv
    import numpy as np

    video = cv.VideoCapture(0) #tells obj to use built-in camera

    #create a face cascade object
    face_cascade = cv.CascadeClassifier(r"C:\Users\xxxxxxx\AppData\Roaming\Python\Python36\site-packages\cv2\data\haarcascade_frontalcatface.xml")

    a = 1
    #create loop to display a video
    while True:
       a = a + 1
       check, frame = video.read()
       print(frame)

       #converts to a gray scale img
       gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

       #create the faces
       faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

       for(x, y, w, h) in faces:
           print(x, y, w, h)

       #show the image
       cv.imshow('capturing', gray)

       key = cv.waitKey(1) #gen a new frame every 1ms

       if key == ord('q'): #once you enter 'q' the loop will be exited
           break

    print(a) #this will print the number of frames

    #captures the first frame
    video.release() #end the web cam

    #destroys the windows when you are done
    cv.destroyAllWindows()

    The code displays the video captured from my webcam. Despite that, OpenCV doesn't seem to be processing any frames, as all the frames look like this:

    [[0 0 0]
     [0 0 0]
     [0 0 0]
     ...
     [0 0 0]
     [0 0 0]
     [0 0 0]]]

    which I assume means that they are empty.

    This I believe is preventing the algorithm from being able to detect my face in the frame. I have a feeling that the issue lies in the ffmpeg codec, but I’m not entirely sure how to proceed even if that is the case.

    OS: Windows 10
    Language: Python

    Why is the frame empty, and how can I get OpenCV to detect my face in the frame?
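
    As a first diagnostic step (a minimal sketch, not a confirmed fix), it is worth checking whether the capture device actually opened and whether read() reports success; on Windows, explicitly selecting the DirectShow backend is a common experiment when frames come back all black. The specific checks and the CAP_DSHOW suggestion are assumptions, not part of the original question.

    import cv2 as cv

    # Open the default camera; cv.CAP_DSHOW (DirectShow) is an alternative
    # backend worth trying on Windows if the default one delivers empty frames.
    video = cv.VideoCapture(0)            # or: cv.VideoCapture(0, cv.CAP_DSHOW)

    if not video.isOpened():
        raise RuntimeError("Camera could not be opened")

    check, frame = video.read()
    if not check or frame is None:
        print("read() failed: the camera delivered no frame")
    else:
        # An all-zero (black) frame has mean 0; a real image normally does not.
        print("frame shape:", frame.shape, "mean pixel value:", frame.mean())

    video.release()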