Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the type and exact version of the browser with which you encountered the anomaly; as precise a description as possible of the problem encountered; if possible, the steps to reproduce the problem; and a link to the site/page in question.
    If you think you have fixed the bug yourself (...)

On other sites (12343)

  • varying RTP stream result from custom SIP implementation

    1 February, by Nik Hendricks

    I am in the process of creating my own SIP implementation in Node.js, as well as a B2BUA, as a learning project.

    Finding people wise in the ways of SIP has proved difficult elsewhere, but here I have had good results.

    This is the GitHub repository of my library so far: node.js-sip

    This is the GitHub repository of my PBX so far: FlowPBX

    Currently, everything is working as I expect, although I really have some questions about possible errors in my implementation.

    My main issue is with RTP streams. Currently I am utilizing ffmpeg.

    My function goes as follows:

    start_stream(call_id, sdp){
        console.log('Starting Stream')
        // Parse the remote connection address, audio port and offered payload types from the SDP
        let port = sdp.match(/m=audio (\d+) RTP/)[1];
        let ip = sdp.match(/c=IN IP4 (\d+\.\d+\.\d+\.\d+)/)[1];
        let codec_ids = sdp.match(/m=audio \d+ RTP\/AVP (.+)/)[1].split(' ');
        let ffmpeg_codec_map = {
            'opus': 'libopus',
            'PCMU': 'pcm_mulaw',
            'PCMA': 'pcm_alaw',
            'telephone-event': 'pcm_mulaw',
            'speex': 'speex',
            'G722': 'g722',
            'G729': 'g729',
            'GSM': 'gsm',
            'AMR': 'amr',
            'AMR-WB': 'amr_wb',
            'iLBC': 'ilbc',
            'iSAC': 'isac',
        }

        // Build the list of offered codecs from the a=rtpmap lines
        let codecs = [];
        sdp.split('\n').forEach(line => {
            if(line.includes('a=rtpmap')){
                let codec = line.match(/a=rtpmap:(\d+) (.+)/)[2];
                let c_id = line.match(/a=rtpmap:(\d+) (.+)/)[1];
                codecs.push({                    
                    name: codec.split('/')[0],
                    rate: codec.split('/')[1],
                    channels: codec.split('/')[2] !== undefined ? codec.split('/')[2] : 1,
                    id: c_id
                })
            }
        })

        console.log('codecs')
        console.log(codecs)

        let selected_codec = codecs[0]
        if(selected_codec.name == 'telephone-event'){
            selected_codec = codecs[1]
            console.log(selected_codec)
        }

        //see if opus is available
        codecs.forEach(codec => {
            if(codec.name == 'opus'){
                selected_codec = codec;
            }
        })

        if(selected_codec.name != 'opus'){
            //check if g729 is available
            codecs.forEach(codec => {
                if(codec.name == 'G729'){
                    selected_codec = codec;
                }
            })
        }

        console.log('selected_codec')
        console.log(selected_codec)

        // Stream the audio file to the remote endpoint over RTP using the negotiated codec settings
        let spawn = require('child_process').spawn;
        let ffmpegArgs = [
            '-re',
            '-i', 'song.mp3',
            '-acodec', ffmpeg_codec_map[selected_codec.name],
            '-ar', selected_codec.rate,
            '-ac', selected_codec.channels,
            '-payload_type', selected_codec.id,
            '-f', 'rtp', `rtp://${ip}:${port}`
        ];

        let ffmpeg = spawn('ffmpeg', ffmpegArgs);

        ffmpeg.stdout.on('data', (data) => {
            console.log(`stdout: ${data}`);
        });
        ffmpeg.stderr.on('data', (data) => {
            console.error(`stderr: ${data}`);
        });
    }

    When using Zoiper to test, it works great. I have seen the mobile version negotiate speex and the desktop version mostly negotiate opus as the codec.

    Today I tried to register a Grandstream phone to my PBX, and the RTP stream is blank audio. Opus is available and I have tried to prefer it in my stream, but even when selecting it I cannot get audio to the Grandstream phone. The same is the case for a Yealink phone; I can only get Zoiper to work so far.

    What could be causing this behavior? There is a clear path of communication between everything, just like with the Zoiper clients I have used.

    Additionally, in my SIP implementation, how important is the concept of a dialog? Currently I just match messages by Call-ID, and then choose what to send based on the method or response. Is there any other underlying dialog functionality that I may need to implement? (A small sketch of what I mean follows below.)
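
    To make the question concrete, here is a minimal sketch of the kind of matching I mean (the msg structure is hypothetical, not my actual library code). My understanding from RFC 3261 is that a dialog is identified by the Call-ID together with the local and remote tags, not by the Call-ID alone:

    // Hypothetical sketch: key messages by the full dialog id
    // (Call-ID + From tag + To tag) rather than by Call-ID only.
    function dialog_key(msg){
        let call_id = msg.headers['Call-ID'];
        let from_tag = (msg.headers['From'].match(/tag=([^;\s]+)/) || [])[1] || '';
        let to_tag = (msg.headers['To'].match(/tag=([^;\s]+)/) || [])[1] || '';
        return `${call_id};${from_tag};${to_tag}`;
    }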

    It would be awesome to get someone who really knows what they are talking about to look over some of my code and point this large codebase in the right direction, but I realize that is a big ask.

  • Why is my FastAPI process being suspended, and how can I avoid this?

    19 January, by blermen

    I'm working on a web app using FastAPI that uses ffmpeg to overlay audio onto video for the user. I'm running into an issue where, when I use subprocess.run(cmd), it automatically suspends the process running my FastAPI app. I can't figure out how to get the error logs to help deduce why this is, and I haven't found anything online talking about this.

@app.get("/overlay-audio/")
async def get_video(audio_file: str, forged_name: Annotated[str, Query()] = "default"):
    video_path = os.path.join(output_path, "sample.mp4")
    audio_path = os.path.join(output_path, audio_file)
    forged_path = os.path.join(output_path, forged_name + ".mp4")
    print("Video path: " + video_path)
    print("Audio path: " + audio_path)
    print("Output path: " + forged_path)

    # command to recreate
    # ffmpeg -i input.mp4 -i input.wav -c:v copy -map 0:v:0 -map 1:a:0 -c:a aac -b:a 192k output.mp4

    cmd = ["/opt/homebrew/bin/ffmpeg", 
           "-i", video_path,
           "-i", audio_path,
           "-c:v", "copy",
           "-map", "0:v:0",
           "-map", "1:a:0",
           "-c:a", "aac",
           "-b:a", "192k",
           forged_path]
    
    subprocess.run(cmd)
           
    return {"forged_vid": f"forged_{forged_name}"}


if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)

    I've tried not writing output to the terminal, as I've read that could be a reason why it suspends, using result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE), and I've also tried running it asynchronously to avoid blocking the event loop using

    result = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )

    but nothing works. Any help or possible other ways to go about this would be greatly appreciated. Terminal output about the suspension: [1] + 12526 suspended (tty output) "/Users//Tech Projects/project/tts/videnv/bin/python"
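
    For reference, a minimal sketch of an invocation that should avoid touching the tty at all, discarding ffmpeg's output instead of piping it (run_ffmpeg is a hypothetical helper, not my actual route code):

    import asyncio
    import subprocess

    # Hypothetical helper: discard ffmpeg's output so nothing is ever written to the
    # controlling terminal, and wait for the process instead of dropping the handle.
    async def run_ffmpeg(cmd):
        proc = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,  # ffmpeg writes its progress/log output to stderr
        )
        return await proc.wait()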

  • how to resolve pixel jittering in javacv generated video

    16 September 2024, by Vikram

    I am trying to generate a video from images, where the images have some overlay text and PNG icons. I am using the javacv library for this.
    The final output video seems pixelated; I don't understand what it is, since I do not have video-processing domain knowledge and am a beginner at this.
    I know that the video bitrate and the choice of video encoder are important factors that contribute to video quality, and there are many more factors too.

    I am providing two outputs for comparison: one generated using javacv and the other generated with the moviepy library.

    Please watch them in full screen, since the problem I am talking about only shows up in full screen; you will see the pixels dancing in the javacv-generated video, while the Python output seems stable.

    https://imgur.com/a/aowNnKg - javacv generated

    https://imgur.com/a/eiLXrbk - Moviepy generated

    I am using the same encoder in both implementations.

    Encoder: libx264 for both
    Bitrate: 800 Kbps for javacv, 500 Kbps for moviepy
    Frame rate: 24 fps for both
    Output video size: 7 MB (javacv), 5 MB (moviepy)

    The output generated by javacv is bigger than the moviepy-generated video.

    Here is my Java configuration for FFmpegFrameRecorder:

        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(this.outputPath,
                                                                this.screensizeX, this.screensizeY);
        if (this.videoCodecName != null && "libx264".equals(this.videoCodecName)) {
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        }
        recorder.setFormat("mp4");
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P); // yuv420p is the widely supported pixel format for H.264
        recorder.setVideoBitrate(800000);
        recorder.setImageWidth(this.screensizeX);
        recorder.setFrameRate(24);

    And here is the Python configuration for writing the video file:

    Full_final_clip.write_videofile(
                            f"{video_folder_path}/{FILE_ID}_INTERMEDIATE.mp4",
                            codec="libx264",
                            audio_codec="aac",
                            temp_audiofile=f"{FILE_ID}_INTER_temp-audio.m4a",
                            remove_temp=True,
                            fps=24,
                        )

    As you can see, I am not specifying a bitrate in Python, but I checked that the bitrate of the final output is around 500 kbps, which is lower than what I specified in Java, yet the Java-generated video quality seems poor.

    I have also tried setting the CRF value, but it seems to have no impact when used (a sketch of what I mean is shown below).

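    For illustration, a CRF-based configuration would look roughly like the following (a simplified sketch, not my exact code; I am also unsure whether the explicit setVideoBitrate call has to be removed for CRF rate control to take effect):

        // Sketch: setVideoOption passes options through to the libx264 encoder.
        // "crf" selects quality-based rate control; "preset" trades encoding speed
        // for compression efficiency.
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setFormat("mp4");
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
        recorder.setFrameRate(24);
        recorder.setVideoOption("crf", "23");
        recorder.setVideoOption("preset", "slow");
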
    Increasing the bitrate improves quality somewhat, but at the cost of file size; the generated output still seems pixelated.

    Can someone please point out what the issue might be, and how Python is generating a better-quality video when both libraries use ffmpeg as the backend?

    Edit 1: I am also adding the code used to create the zoom animation across consecutive frames, as I read somewhere that this might be the cause of the pixel jitter. Please take a look and let me know if there is any improvement that could remove the jittering.

    private Mat applyZoomEffect(Mat frame, int currentFrame, long effectFrames, int width, int height, String mode, String position, double speed) {
        long totalFrames = effectFrames;
        double i = currentFrame;
        if ("out".equals(mode)) {
            i = totalFrames - i;
        }
        double zoom = 1 + (i * ((0.1 * speed) / totalFrames));

        double originalCenterX = width/2.0;
        double originalCenterY = height/2.0;
   

        // Resize image
        //opencv_imgproc.resize(frame, resizedMat, new Size(newWidth, newHeight));

        // Determine crop region based on position
        double x = 0, y = 0;
        switch (position.toLowerCase()) {
            case "center":
                // Adjusting for center zoom
                 x = originalCenterX - originalCenterX * zoom;
                 y = originalCenterY - originalCenterY * zoom;
                 
                 x= (width-(width*zoom))/2.0;
                 y= (height-(height*zoom))/2.0;
                break;
        }

        double[][] rowData = {{zoom, 0, x},{0,zoom,y}};

        double[] flatData = flattenArray(rowData);

        // Create a DoublePointer from the flattened array
        DoublePointer doublePointer = new DoublePointer(flatData);

        // Create a Mat object with two rows and three columns
        Mat mat = new Mat(2, 3, org.bytedeco.opencv.global.opencv_core.CV_64F); // CV_64F is for double type

        // Put the data into the Mat object
        mat.data().put(doublePointer);
        Mat transformedFrame = new Mat();
        opencv_imgproc.warpAffine(frame, transformedFrame, mat, new Size(frame.cols(), frame.rows()),opencv_imgproc.INTER_LANCZOS4,0,new Scalar(0,0,0,0));
        return transformedFrame;
    }