
Other articles (48)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 avril 2011, par

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (9665)

  • How to make cv2.VideoCapture.read() faster?

    9 November 2022, by Yu-Long Tsai

    My question:

    I am working on a computer vision project, implemented with OpenCV (4.1.2) and Python.

    



    I need a faster way to get frames from the camera into my image-processing stage on my machine (Ubuntu 18.04, 8-core i7 3.00 GHz, 32 GB RAM). A single cv2.VideoCapture.read() call (frame size 720x1280) takes about 120-140 ms, which is too slow: my processing module takes about 40 ms per frame, and I am aiming for 25-30 FPS.
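A quick back-of-the-envelope check of those numbers shows why the read is the bottleneck: even with processing moved to its own thread, throughput is capped by the slower stage.

```python
read_ms = 130       # measured cost of one cv2.VideoCapture.read() (120-140 ms)
process_ms = 40     # measured cost of the processing module

# With reading and processing in separate threads, the pipeline can go no
# faster than its slowest stage:
max_fps = 1000 / max(read_ms, process_ms)
print(round(max_fps, 1))  # 7.7 -- far below the 25-30 FPS target
```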

    



    Here is my demo code so far:

    



    import cv2
from collections import deque
from time import sleep, time
import threading


class camCapture:
    def __init__(self, camID, buffer_size):
        self.Frame = deque(maxlen=buffer_size)
        self.status = False
        self.isstop = False
        self.capture = cv2.VideoCapture(camID)


    def start(self):
        print('camera started!')
        t1 = threading.Thread(target=self.queryframe, daemon=True, args=())
        t1.start()

    def stop(self):
        self.isstop = True
        print('camera stopped!')

    def getframe(self):
        print('current buffers : ', len(self.Frame))
        return self.Frame.popleft()

    def queryframe(self):
        while (not self.isstop):
            start = time()
            self.status, tmp = self.capture.read()
            print('read frame processed : ', (time() - start) *1000, 'ms')
            self.Frame.append(tmp)

        self.capture.release()

cam = camCapture(camID=0, buffer_size=50)
W, H = 1280, 720
cam.capture.set(cv2.CAP_PROP_FRAME_WIDTH, W)
cam.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, H)


# start the reading frame thread
cam.start()

# filling frames
sleep(5)

while True:
    frame = cam.getframe()  # numpy array, shape (720, 1280, 3)

    cv2.imshow('video', frame)
    sleep(40 / 1000)  # mimic the processing time

    if cv2.waitKey(1) == 27:
        cv2.destroyAllWindows()
        cam.stop()
        break



    



    What I tried:

    1. Multithreading: one thread just reads frames while the other does the image processing. It is NOT what I want: I can set up a deque buffering, say, 50 frames, but the frame-reading thread runs at one frame per ~130 ms while the image-processing thread runs at one frame per ~40 ms, so the deque simply runs dry. I tried this approach, but it is not what I need.

    2. This topic is the closest discussion to my question that I found, but unfortunately the accepted solutions (both answers below it) did not work for me.
    



    One of the solutions (6 thumbs up) points out that its author could read and save 100 frames at 1-second intervals on his Mac. Why can't my machine handle the frame reading? Am I missing something? I installed OpenCV with both conda (conda install -c conda-forge opencv) and pip (pip install opencv-python); yes, I tried both.

    



    The other solution (1 thumb up) uses ffmpeg, but it seems to work with video files rather than camera devices.

    



      

    I also tried adjusting cv2.waitKey(): its parameter only controls how frequently the video is displayed; it is not a solution.

    



    At this point, I just need some keywords to follow.

    



    The code above is my demo so far. I am looking for a method or guide to make VideoCapture.read() faster, perhaps a way to use multithreading inside the VideoCapture object, or another camera-reading module.

    



    Any suggestions?
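One hand-off pattern worth naming here (a sketch, not the asker's code): a bounded queue.Queue makes the producer/consumer exchange safe, because get() blocks instead of raising on an empty buffer the way deque.popleft() does. It does not make read() itself faster; it only removes the "deque running out" failure mode. The frame source below is simulated so the sketch runs without a camera.

```python
import queue
import threading

def read_frame(i):
    # Stand-in for cv2.VideoCapture.read(); returns a fake "frame".
    return f"frame-{i}"

def producer(q, n_frames):
    for i in range(n_frames):
        q.put(read_frame(i))   # blocks if the buffer is full
    q.put(None)                # sentinel: end of stream

def consumer(q, out):
    while True:
        frame = q.get()        # blocks while the buffer is empty
        if frame is None:
            break
        out.append(frame)

frames = []
q = queue.Queue(maxsize=50)    # bounded buffer, like the 50-frame deque above
t = threading.Thread(target=producer, args=(q, 20), daemon=True)
t.start()
consumer(q, frames)
t.join()
print(len(frames))  # 20
```

Separately, on Linux UVC webcams the read itself can sometimes be sped up by requesting compressed frames, e.g. cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) before setting the resolution; whether that helps depends on the camera and driver.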

    


  • Packet loss in an RTP stream decoded with ffmpeg

    29 April 2020, by anaVC94

    I hope someone can help me. I am trying to run a neural network (YOLOv2) for object detection on an RTP stream that I simulate locally using VLC, with the "using RTP over TCP" option enabled.
    The stream is 4K, 30 fps, 15 Mb/s. I use the OpenCV C API I/O module to read frames from the stream.

    



    I am using standard code. I open the RTP stream as follows:
cap = cvCaptureFromFile(url);

    



    Then, in one thread, I capture frames:
IplImage* src = cvQueryFrame(cap);

    



    and, in another thread, I run the detection part.

    



    I know OpenCV uses ffmpeg to capture video; I am using ffmpeg 3.3.2. My problem is that a lot of artifacts appear when I receive the stream. The output I get is:

    



    top block unavailable for requested intra mode -1
[h264 @ 0000016ae36f0a40] error while decoding MB 40 0, bytestream 81024
[h264 @ 0000016ae32f3860] top block unavailable for requested intra mode
[h264 @ 0000016ae32f3860] error while decoding MB 48 0, bytestream 102909
[h264 @ 0000016ae35e9e60] Reference 3 >= 3
[h264 @ 0000016ae35e9e60] error while decoding MB 79 0, bytestream 27231
[h264 @ 0000016a7033eb40] mmco: unref short failure
[h264 @ 0000016ae473ee00] Reference 3 >= 3


    



    over and over again, and there is so much packet loss that I cannot see anything when displaying the received frames. However, this does not happen when I stream lower-quality videos over RTP, such as HD at 30 fps. It is also true that the 4K video contains a lot of motion (it is a MotoGP race).

    



    I have tried:
- Reducing the fps of the stream.
- Reducing the bitrate.
- ffplay also fails to display the input frames correctly, but VLC shows them fine (I don't know why).
- Forcing TCP transmission.
- Forcing the input fps with cvSetCaptureProperty(cap, CV_CAP_PROP_FPS, 1);

    



    Why is this happening, and how can I reduce the packet loss? Is there anything else I can try?
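One knob worth trying (an assumption on my part, not something confirmed in the question, and it requires a newer OpenCV build than the ffmpeg 3.3.2 setup described): OpenCV's FFmpeg backend honors the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable, so if the stream is opened via an RTSP URL you can ask FFmpeg to carry the RTP payload over TCP, which avoids UDP packet loss. The variable must be set before OpenCV is loaded.

```python
import os

# key;value pairs separated by '|' -- the format OpenCV's FFmpeg backend expects
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

# import cv2                              # must happen *after* the variable is set
# cap = cv2.VideoCapture("rtsp://...")    # hypothetical stream URL
```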

    



    Thanks a lot

    


  • MovieWriter FFwriter unavailable. Trying to use pillow instead

    17 February 2020, by Life is Good

    I have created a bar chart race in Matplotlib and now I am trying to save it as a GIF file. I have imported the relevant library:

    import matplotlib.animation as animation
    plt.rcParams['animation.ffmpeg_path'] = '‪C:\\Program Files\\FFmpeg\\bin\\ffmpeg.exe'
    FFwriter = animation.FFMpegWriter()

    Here is the code I used to create my animation :

    fig, ax = plt.subplots(figsize=(15, 8))
    animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1999, 2015))
    HTML(animator.to_jshtml())

    However, when I search for available writers, only two are listed:

    animation.writers.list()


    I had already installed FFmpeg following a WikiHow tutorial, and I can run FFmpeg from the command line, so I don't understand why it is not showing up. When I try to save my animation as a GIF, I get an error message (screenshot).

    Is anybody familiar with this error message, please? Thank you very much.
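One cause worth ruling out (a guess based on the rcParams line above, not confirmed by the screenshots): paths copied from the Windows Explorer address bar sometimes carry an invisible Unicode directional mark (U+202A) at the front, which makes the ffmpeg path invalid, so Matplotlib silently drops the ffmpeg writers from its registry. A sketch of detecting and stripping it:

```python
# A path as it may arrive from the clipboard, with a leading U+202A mark:
path = "\u202aC:\\Program Files\\FFmpeg\\bin\\ffmpeg.exe"

# Keep only printable ASCII characters before handing the path to Matplotlib:
clean = "".join(ch for ch in path if ch.isascii() and ch.isprintable())
print(clean.startswith("C:"))  # True

# plt.rcParams['animation.ffmpeg_path'] = clean
# then re-check: matplotlib.animation.writers.list() should include 'ffmpeg'
```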