
Other articles (91)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    To use it, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (9725)

  • How to efficiently store a variable frame rate video stream in a PyQt application?

    1 October 2024, by Jeroen De Geeter

    I am developing a PyQt (PySide6) application that needs to display and store multiple camera streams at the same time. Displaying the camera streams goes well; however, storing these streams slows the application down significantly, to the point where the GUI no longer runs fluently.

    I have a minimal working example using a stub to demonstrate how my code currently works. However, given that it is a minimal working example, it will not visibly slow down.

    import sys
from time import sleep

import av
import numpy as np
import pyqtgraph as pg
from PySide6.QtCore import QThread, Signal, Slot, Qt
from PySide6.QtWidgets import QApplication, QHBoxLayout, QWidget, QVBoxLayout, QPushButton, QGroupBox


class RGBCameraStub(QThread):

    newFrame = Signal(np.ndarray)

    def __init__(self):
        super().__init__()
        self.killSwitch = True

    def stop(self):
        self.killSwitch = False
        self.quit()
        self.wait()

    def run(self):
        self.killSwitch = True
        while self.killSwitch:
            # Emit (height, width, channels) frames matching the writer's 1456x1080 stream.
            self.newFrame.emit((np.random.rand(1080, 1456, 3) * 255).astype(np.uint8))
            # Simulate a variable frame rate of roughly 20-50 ms per frame.
            sleep((20 + int(np.random.rand() * 30)) / 1000)


class VideoWriter(QThread):

    def __init__(self):
        super().__init__()
        # FFV1 in Matroska: a lossless intra-frame video codec.
        self.output_container = av.open('output_video.mkv', mode='w')
        self.stream = self.output_container.add_stream('ffv1', rate=None)
        self.stream.width = 1456
        self.stream.height = 1080
        # Note: converting rgb24 input to yuv420p subsamples chroma, so the
        # pipeline is not strictly lossless end-to-end.
        self.stream.pix_fmt = 'yuv420p'

    @Slot(np.ndarray)
    def addFrame(self, frame: np.ndarray):
        av_frame = av.VideoFrame.from_ndarray(frame, format='rgb24')
        av_frame.pts = None # Leave empty for auto-handling - variable framerate?
        for packet in self.stream.encode(av_frame):
            self.output_container.mux(packet)

    def stop(self):
        # Flush frames still buffered in the encoder before closing the container.
        for packet in self.stream.encode():
            self.output_container.mux(packet)
        self.output_container.close()
        self.quit()
        self.wait()

    def run(self):
        # Note: slots connected to this QThread object still execute in the
        # thread that owns the object (here the main thread), not in this loop.
        self.exec()


class VideoBox(QGroupBox):

    def __init__(self, title):
        super().__init__(title=title)
        self.createLayout()
        self.videoWidget.setImage((np.random.rand(1456, 1080, 3) * 255).astype(np.uint8))

    def createLayout(self):
        layout = QVBoxLayout()
        self.videoWidget = pg.RawImageWidget()
        layout.addWidget(self.videoWidget)
        self.setLayout(layout)
        self.setStyleSheet("""QGroupBox {
            border: 1px solid #494B4F;
            margin-top: 8px;
            min-width: 180px;
            min-height: 180px;
            padding: 2px 0px 0px 0px;
            }
        QGroupBox::title {
            color: #aeb0b8;
            subcontrol-origin: margin;
            subcontrol-position: top left;
            left: 20px;
            padding: 0 8px;
        }""")

    def setImage(self, data: np.ndarray):
        self.videoWidget.setImage(data)

class MainWindow(QWidget):

    closeSignal = Signal()

    def __init__(self):
        super().__init__()
        self.setGeometry(0, 0, 900, 720)
        self.createLayout()

    def createLayout(self):
        self.vimbaImage = VideoBox("RGB")
        self.info = self.infoLayout()

        layout = QVBoxLayout()
        layout.addWidget(self.vimbaImage)
        layout.addWidget(self.info)
        self.setLayout(layout)

        self.setAttribute(Qt.WA_StyledBackground, True)
        self.setStyleSheet("MainWindow { background-color: #1e1f22; }")

    def infoLayout(self):
        widget = QWidget()
        layout = QVBoxLayout()

        rgbButtonWidget = QWidget()
        buttonLayout = QHBoxLayout()
        self.connectButton = QPushButton('Connect', parent=self)
        self.disconnectButton = QPushButton('Disconnect', parent=self)
        buttonLayout.addWidget(self.connectButton)
        buttonLayout.addWidget(self.disconnectButton)
        buttonLayout.addStretch()
        rgbButtonWidget.setLayout(buttonLayout)
        layout.addWidget(rgbButtonWidget)

        widget.setLayout(layout)
        return widget

    def closeEvent(self, event):
        self.closeSignal.emit()
        event.accept()



if __name__ == "__main__":
    app = QApplication(sys.argv)

    rgbCamera = RGBCameraStub()
    videoWriter = VideoWriter()
    videoWriter.start()

    main_window = MainWindow()

    # Button connections
    main_window.connectButton.clicked.connect(rgbCamera.start)
    main_window.disconnectButton.clicked.connect(rgbCamera.stop)
    # main_window.disconnectButton.clicked.connect(videoWriter.stop)

    # Display frames
    rgbCamera.newFrame.connect(main_window.vimbaImage.setImage)

    # Write frame to file
    rgbCamera.newFrame.connect(videoWriter.addFrame)

    # Close application
    main_window.closeSignal.connect(rgbCamera.stop)
    main_window.closeSignal.connect(videoWriter.stop)

    main_window.show()
    sys.exit(app.exec())

    My questions therefore are:

    • How can I increase the performance of the VideoWriter? I currently add frames one by one, as soon as the camera thread provides a new frame; maybe this is not the best approach? (See the sketch below.)

    • The frame rate of the camera is not completely stable, so I set av_frame.pts = None, but maybe this is not the approach to take either?

    • With the code as is, the resulting media file quickly blows up in size; is there a way of dealing with this without quality loss?

    As a side note, I currently use the PyAV wrapper for the FFmpeg libraries, but I am open to other suggestions.
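
    One possible direction, sketched below (not from the original post): in Qt, a slot on a QThread object runs in the thread that owns the object, which here is the main GUI thread, so the encoding work in addFrame competes directly with the interface. The sketch moves encoding into the writer's own loop behind a bounded queue and derives pts from wall-clock capture times, preserving the variable frame rate; names such as QueuedVideoWriter and the queue size are assumptions for illustration.

import time
from fractions import Fraction
from queue import Queue

import av
import numpy as np


class QueuedVideoWriter:
    """Encodes on its own loop; capture threads only enqueue frames."""

    def __init__(self, path: str, width: int, height: int):
        self.container = av.open(path, mode='w')
        self.stream = self.container.add_stream('ffv1')
        self.stream.width = width
        self.stream.height = height
        self.stream.pix_fmt = 'yuv420p'
        # Millisecond time base, so pts can be taken from wall-clock timestamps.
        self.stream.codec_context.time_base = Fraction(1, 1000)
        self.queue = Queue(maxsize=256)  # bounded, keeps memory use flat
        self.t0 = None

    def add_frame(self, frame: np.ndarray):
        # Called from the camera thread: timestamp and enqueue, never encode here.
        self.queue.put((time.monotonic(), frame))

    def stop(self):
        self.queue.put(None)  # sentinel: drain the queue, then finish

    def run(self):
        while True:
            item = self.queue.get()
            if item is None:
                break
            t, frame = item
            if self.t0 is None:
                self.t0 = t
            av_frame = av.VideoFrame.from_ndarray(frame, format='rgb24')
            # Explicit pts in ms since the first frame keeps the true VFR timing.
            av_frame.pts = int((t - self.t0) * 1000)
            for packet in self.stream.encode(av_frame):
                self.container.mux(packet)
        for packet in self.stream.encode():  # flush delayed frames
            self.container.mux(packet)
        self.container.close()

    In the Qt version this loop would live in the QThread's run() method; the point is only that encoding never runs in the GUI thread, and that every frame carries an explicit timestamp instead of pts = None.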

  • Documented ffmpeg commands not recognized by ffmpeg

    2 April 2020, by agconti

    I'm trying to use options like ldash and http_opts, as the dash muxer docs describe, but FFmpeg doesn't recognize them. I'm on the latest released version of FFmpeg, v4.2.2. I see the changes on the FFmpeg master branch but not on the v4.2 release branch. Does FFmpeg not recognize them because they haven't been released yet?

    Here are the dash muxer docs for reference: https://ffmpeg.org/ffmpeg-all.html#dash-2

    Here's a minimal example command with its uncut output:

    Andrews-MacBook-Pro :: dev/test ‹master› » ffmpeg -re -i test.mp4 \                                   
-map 0 -map 0 -c:a libfdk_aac -c:v libx264 \
-b:v:0 800k -b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline \
-profile:v:0 main -bf 1 \
-b_strategy 0 -ar:a:1 22050 \
-adaptation_sets "id=0,streams=v id=1,streams=a" \
-ldash 1 \
-f dash ./output/out.mpd

ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Unrecognized option 'ldash'.
Error splitting the argument list: Option not found
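
    That is consistent with the option simply not being in the 4.2 release: to the best of my knowledge, -ldash landed on master in early 2020 and first shipped with the 4.3 release. A quick way to verify what a given build's dash muxer actually supports is to ask that build to list the muxer's options (a hypothetical check, output omitted):

ffmpeg -h muxer=dash

    If ldash does not appear in that listing, the remaining options are upgrading to a release that includes it, or building FFmpeg from the master branch.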

  • Efficient real-time video stream processing and forwarding with RTMP servers

    19 May 2023, by dumbQuestions

    I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).

    Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that achieves the desired result more efficiently. I did attempt to rely solely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and then transmit only those processed frames.

    Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?

    Thanks in advance!

import subprocess

import cv2


def forward_stream(server_url, stream_key, twitch_stream_key):
    get_ffmpeg_command = [...]

    send_ffmpeg_command = [...]

    # Start the receiving FFmpeg process
    read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Start the sending FFmpeg process
    send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Open video capture
    cap = cv2.VideoCapture(f'{server_url}')

    while True:
        # Read a frame
        ret, frame = cap.read()
        if ret:
            # Apply the machine learning algorithm once per frame
            should_blur = machine_learning_algorithm(frame)

            # Apply blur if necessary
            if should_blur:
                frame = cv2.blur(frame, (25, 25))

            # Write the frame to the sending FFmpeg process
            send_process.stdin.write(frame.tobytes())
        else:
            break

    # Release resources
    cap.release()
    send_process.stdin.close()
    send_process.wait()
    read_process.stdout.close()
    read_process.wait()
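
    For context, and purely as an illustration (the two command lists are elided as [...] in the post, and the geometry below is an assumption, not the poster's actual configuration), a common shape for such a pipeline decodes the incoming stream to raw frames on one FFmpeg process's stdout and feeds processed raw frames to a second FFmpeg process that encodes and pushes to Twitch:

# Illustrative sketch only: plausible stand-ins for the elided command lists.
# WIDTH, HEIGHT and FPS are assumptions; they must match the real stream.
WIDTH, HEIGHT, FPS = 1280, 720, 30

# Decode the incoming RTMP stream to raw BGR frames on stdout.
get_ffmpeg_command = [
    'ffmpeg', '-i', f'{server_url}/{stream_key}',
    '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-',
]

# Read raw BGR frames from stdin, encode with low-latency x264, push to Twitch.
send_ffmpeg_command = [
    'ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24',
    '-s', f'{WIDTH}x{HEIGHT}', '-r', str(FPS), '-i', '-',
    '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
    '-f', 'flv', f'rtmp://live.twitch.tv/app/{twitch_stream_key}',
]

    One likely source of the reported lag is also visible in the loop above: cv2.VideoCapture decodes the stream a second time on top of read_process, which is started but never read. Reading raw frames from read_process.stdout instead would avoid the duplicate decode.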