
On other websites (13939)

  • Capture xvfb stream from docker container using webRTC [closed]

    5 September 2023, by HIMANSHU SAHU

    I have a Windows game .exe and I'm running it inside a Docker container (in an EC2 Linux VM) using Xvfb as the X server. Now I want to capture the X server stream from the Docker container on my local system using WebRTC. What should I do to achieve this?

    


    Here's my Dockerfile:

    


    FROM ubuntu:22.04

    # Specify a workdir, to better organize your files inside the container.
    WORKDIR /app

    # Update package lists and install required packages
    RUN apt-get update
    RUN apt-get install -y wget software-properties-common gnupg2 winbind xvfb
    RUN apt-get -qq update -y && apt-get -qq install -y --no-install-recommends \
        build-essential cpp cpp-9 g++-9 gcc-9 gcc-10 g++-10 gcc-multilib gcc-mingw-w64 \
        git-core \
        dkms \
        ffmpeg

    # Add Wine repository and install
    RUN dpkg --add-architecture i386
    RUN mkdir -pm755 /etc/apt/keyrings
    RUN wget -O /etc/apt/keyrings/winehq-archive.key https://dl.winehq.org/wine-builds/winehq.key
    RUN wget -NP /etc/apt/sources.list.d/ https://dl.winehq.org/wine-builds/ubuntu/dists/jammy/winehq-jammy.sources
    RUN apt-get update

    RUN apt-get -qq update -y && apt-get -qq install -y --no-install-recommends \
        alsa-utils \
        libasound2-dev libdbus-1-dev libfontconfig-dev libfreetype-dev libgnutls28-dev libldap-common \
        libodbc1 libv4l-0 libjpeg-dev libldap2-dev libpng-dev libtiff-dev libgl-dev libunwind-dev libxml2-dev \
        libxslt1-dev \
        libfaudio-dev \
        libmpg123-dev \
        libosmesa6-dev \
        libsdl2-dev \
        libudev-dev \
        libvkd3d-dev \
        libvulkan-dev

    RUN apt-get -qq update -y && apt-get -qq install -y --no-install-recommends \
        ocl-icd-opencl-dev \
        bison \
        schroot \
        debootstrap \
        flex

    RUN apt-get -qq update -y && apt-get -qq install -y --no-install-recommends \
        libmpg123-dev:i386 \
        libosmesa6-dev:i386 \
        libvulkan-dev:i386 \
        ocl-icd-opencl-dev:i386 \
        bison:i386 \
        flex:i386

    RUN apt-get install -y linux-firmware

    # Install additional packages and configure Wine
    RUN apt-get install --no-install-recommends -y winehq-stable winetricks cabextract
    RUN winetricks msxml6

    # Cleanup unnecessary files
    RUN apt-get clean
    RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

    ENV WINEDEBUG=-d3d

    COPY app /root/catalyst
    COPY overcooked_drm-free_20_566540 /root/overcooked_drm-free_20_566540
    COPY startup.sh /root/startup.sh
    RUN chmod 755 /root/startup.sh

    EXPOSE 9000

    CMD ["/root/startup.sh"]

    


    startup.sh:

    


    #!/usr/bin/env bash

    sleep 1s
    Xvfb :1 -screen 0 1024x768x24 &

    glxinfo -force-d3d9
    ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :1 -t 120 outputofscreen1.mp4 &

    DISPLAY=:1 wine /root/overcooked_drm-free_20_566540/Overcooked.exe -force-d3d11

    



    


    Right now I'm capturing the X server stream using ffmpeg and storing it in an .mp4 file. Next, I want to play the game on my local machine, using WebRTC for the connection and streaming.

    


    I have run the .exe on Linux using Wine and the Xvfb server, captured the stream with ffmpeg, and stored it in an .mp4 file.
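    One common pattern for the streaming side is to keep ffmpeg as the capture tool but send its output to a pipe that a separate WebRTC gateway process consumes, rather than to an .mp4 file. The sketch below only builds the ffmpeg command line for that; the display name, resolution, codec choices, and the existence of a gateway are assumptions matching the Dockerfile above, and WebRTC signalling itself is out of scope here.

    ```python
    def x11grab_cmd(display=":1", size="1024x768", framerate=25, output="pipe:1"):
        """Build an ffmpeg command line that grabs an X11 display.

        With output="pipe:1" the encoded stream goes to stdout, where a
        WebRTC gateway process could consume it instead of an .mp4 file.
        """
        return [
            "ffmpeg",
            "-f", "x11grab",
            "-video_size", size,
            "-framerate", str(framerate),
            "-i", display,
            "-c:v", "libx264",       # H.264: the codec most WebRTC stacks accept
            "-preset", "ultrafast",  # prioritize latency over compression
            "-tune", "zerolatency",
            "-f", "mpegts",          # streamable container suitable for piping
            output,
        ]

    # subprocess.Popen(x11grab_cmd(), stdout=subprocess.PIPE) would start the
    # capture; the stdout pipe can then be fed to the WebRTC side.
    ```

    The gateway itself could be, for example, a GStreamer webrtcbin pipeline or an aiortc-based Python server; neither is installed by the Dockerfile above, so that part remains to be added.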

    


  • How to extract frames in sequence as PNG images from ffmpeg stream ?

    7 January, by JamesJGoodwin

    I'm trying to create a program that captures my screen (a game, to be precise) using ffmpeg and streams the frames to NodeJS for live processing. So, if the game runs at 60 fps, I expect ffmpeg to send 60 images per second down stdout. I've written code for that:

    


    import fs from 'fs';
    import { spawn as spawnChildProcess } from 'child_process';

    const videoRecordingProcess = spawnChildProcess(
      ffmpegPath,
      [
        '-init_hw_device',
        'd3d11va',
        '-filter_complex',
        'ddagrab=0,hwdownload,format=bgra',
        '-c:v',
        'png',
        '-f',
        'image2pipe',
        '-loglevel',
        'error',
        '-hide_banner',
        'pipe:',
      ],
      {
        stdio: 'pipe',
      },
    );

    videoRecordingProcess.stderr.on('data', (data) => console.error(data.toString()));

    videoRecordingProcess.stdout.on('data', (data) => {
      fs.promises.writeFile(`/home/goodwin/genshin-repertoire-autoplay/imgs/${Date.now()}.bmp`, data);
    });


    


    Currently I'm writing those images to disk for debugging, and it's almost working, except that the images are cropped. Here's what's going on: I get 4 images saved on disk:

    1. A valid 2560x1440 image, but only 1/4 or even 1/5 of the screen is present at the top; the rest of the image is empty (transparent)
    2. A broken image that won't open
    3. A broken image that won't open
    4. A broken image that won't open

    This pattern is nearly consistent: sometimes there are 3, sometimes 4 or 5 broken images between valid ones. What did I do wrong and how do I fix it? My guess is that ffmpeg is streaming images in chunks, each chunk representing a part of the frame that was already processed by progressive scan. I'm not sure whether I should try to reassemble them manually; there has to be a way to get fully rendered frames in one piece, sequentially.
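    The underlying cause of patterns like this is that a child process's stdout `data` events deliver arbitrary byte chunks, not whole images, so frames have to be reassembled by buffering and splitting on PNG boundaries. A minimal Python sketch of that idea follows; the class name and the IEND-scanning shortcut are illustrative, and a robust implementation should walk the PNG chunk lengths instead of searching for `IEND`, since those bytes can in principle occur inside compressed IDAT data.

    ```python
    PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
    IEND_TYPE = b"IEND"

    class PngStreamSplitter:
        """Reassemble complete PNG images from an arbitrarily chunked byte stream."""

        def __init__(self):
            self._buf = bytearray()

        def feed(self, chunk):
            """Append a raw chunk and return any complete PNG frames found so far."""
            self._buf.extend(chunk)
            frames = []
            while True:
                i = self._buf.find(IEND_TYPE)
                # A PNG ends 8 bytes after the start of the IEND type field
                # (4 type bytes + 4 CRC bytes; the IEND data length is 0).
                if i == -1 or len(self._buf) < i + 8:
                    break
                frames.append(bytes(self._buf[: i + 8]))
                del self._buf[: i + 8]
            return frames
    ```

    Feeding such a splitter from the stdout handler and writing only the frames it returns would yield one intact file per frame; the same buffering logic carries over directly to the NodeJS version.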

    


  • Batch splitting large audio files into small fixed-length audio files in moments of silence

    26 July 2023, by Haldjärvi

    To train the SO-VITS-SVC neural network, we need 10-14 second voice files. As material, let's say I use phrases from some game. I have already made a batch script for decoding different files into one working format, another for removing silence, and a third for combining small audio files into files of 13-14 seconds (using Python, pydub and FFmpeg). To finish automating the creation of a training dataset, only one batch script remains: cutting audio files longer than 14 seconds into separate files of 10-14 seconds, preferably cutting at points of silence or near-silence.

    


    So, I need to batch-cut large audio files (20 seconds, 70 seconds, possibly several hundred seconds) into segments of approximately 10-14 seconds, where the main task is to find the quietest spot in the cut region so as not to cut phrases in the middle of a word (which is bad for model training). Can this be done efficiently, so that processing a 30-second file takes far less than 15 seconds? Silence detection is only needed in the cut region, i.e. 10-14 seconds from the start of each file.

    


    I would be very grateful for any help.

    


    I tried to write a script together with ChatGPT, but every variant gave completely unpredictable results, nowhere near what I needed, so I had to settle for cutting files at exactly 14000 milliseconds. I still hope there is a way to cut exactly at quiet points.

    


    import os
from pydub import AudioSegment

input_directory = ".../RemSilence/"
output_directory = ".../Split/"
max_duration = 14000

def split_audio_by_duration(input_file, duration):
    audio = AudioSegment.from_file(input_file)
    segments = []
    for i in range(0, len(audio), duration):
        segment = audio[i:i + duration]
        segments.append(segment)
    return segments

if __name__ == "__main__":
    os.makedirs(output_directory, exist_ok=True)
    audio_files = [os.path.join(input_directory, file) for file in os.listdir(input_directory) if file.endswith(".wav")]
    audio_files.sort(key=lambda file: len(AudioSegment.from_file(file)))
    for file in audio_files:
        audio = AudioSegment.from_file(file)
        if len(audio) > max_duration:
            segments = split_audio_by_duration(file, max_duration)
            for i, segment in enumerate(segments):
                output_filename = f"output_{len(os.listdir(output_directory))+1}.wav"
                output_file_path = os.path.join(output_directory, output_filename)
                segment.export(output_file_path, format="wav")
        else:
            output_filename = f"output_{len(os.listdir(output_directory))+1}.wav"
            output_file_path = os.path.join(output_directory, output_filename)
            audio.export(output_file_path, format="wav")
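The "cut at the quietest spot" requirement can be kept cheap by scanning only the 10-14 s region with short energy windows and cutting at the minimum, since only about 4 seconds of audio per cut is examined. A rough pure-Python sketch follows; the function name, the 50 ms window, and the assumption that samples arrive as a flat list of integers (e.g. from pydub's `get_array_of_samples()`) are illustrative choices, not part of the original script.

```python
def find_quietest_cut_ms(samples, frame_rate, lo_ms=10000, hi_ms=14000, win_ms=50):
    """Return the millisecond offset of the quietest win_ms window in [lo_ms, hi_ms)."""
    lo = frame_rate * lo_ms // 1000
    hi = min(frame_rate * hi_ms // 1000, len(samples))
    win = max(1, frame_rate * win_ms // 1000)
    best_pos, best_energy = lo, float("inf")
    for start in range(lo, hi - win + 1, win):
        # Sum of squares as a cheap loudness proxy for this window.
        energy = sum(s * s for s in samples[start:start + win])
        if energy < best_energy:
            best_energy, best_pos = energy, start
    return best_pos * 1000 // frame_rate
```

In `split_audio_by_duration`, the fixed `duration` step could then be replaced by a cut at the offset this function returns, recomputed relative to the start of each remaining segment; pydub's `silence.detect_silence` helper is another ready-made option for the same job.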