Advanced search

Media (0)

Word: - Tags -/interaction

No media matching your criteria is available on the site.

Other articles (14)

  • Images

    15 May 2013
  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this is handled in the skeleton's configuration page. You need to go to the channel's administration area and choose in the navigation (...)

On other sites (6951)

  • How do I write audio and video to the same file using FFMPEG and C?

    29 June 2018, by benwiz

    I am consuming an audio file and a video file using ffmpeg in a C program, and I am modifying both the audio and the video data. In the working code below I write each of these streams to its own file. How can I write both streams to the same file?

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    // Video resolution
    #define W 1280
    #define H 720

    // Allocate a buffer to store one video frame
    unsigned char video_frame[H][W][3] = {0};

    int main()
    {
       // Audio pipes
       FILE *audio_pipein = popen("ffmpeg -i data/daft-punk.mp3 -f s16le -ac 1 -", "r");
       FILE *audio_pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - out/daft-punk.mp3", "w");

       // Video pipes
       FILE *video_pipein = popen("ffmpeg -i data/daft-punk.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
       FILE *video_pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 out/daft-punk.mp4", "w");

       // Audio vars
       int16_t audio_sample;
       int audio_count;
       int audio_n = 0;

       // Video vars
       int x = 0;
       int y = 0;
       int video_count = 0;

       // Read, modify, and write one audio_sample and video_frame at a time
       while (1)
       {
           // Audio
           audio_count = fread(&audio_sample, 2, 1, audio_pipein); // read one 2-byte audio_sample
           if (audio_count == 1)
           {
               ++audio_n;
               audio_sample = audio_sample * sin(audio_n * 5.0 * 2 * M_PI / 44100.0);
               fwrite(&audio_sample, 2, 1, audio_pipeout);
           }

           // Video
           video_count = fread(video_frame, 1, H * W * 3, video_pipein); // Read a frame from the input pipe into the buffer
           if (video_count == H * W * 3)                                 // Only modify and write if frame exists
           {
               for (y = 0; y < H; ++y)     // Process this frame
                   for (x = 0; x < W; ++x) // Invert each colour component in every pixel
                   {
                       video_frame[y][x][0] = 255 - video_frame[y][x][0]; // red
                       video_frame[y][x][1] = 255 - video_frame[y][x][1]; // green
                       video_frame[y][x][2] = 255 - video_frame[y][x][2]; // blue
                   }
               fwrite(video_frame, 1, H * W * 3, video_pipeout); // Write this frame to the output pipe
           }

           // Break if both complete
           if (audio_count != 1 && video_count != H * W * 3)
               break;
       }

       // Close audio pipes
       pclose(audio_pipein);
       pclose(audio_pipeout);

       // Close video pipes; only the output side needs flushing
       // (fflush on an input stream is undefined behaviour)
       fflush(video_pipeout);
       pclose(video_pipein);
       pclose(video_pipeout);

       return 0;
    }

    I took the base for this code from this article.

    Thanks!
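
    One way to get a single output file (a sketch of my own, not from the original post): keep the two processing pipelines exactly as they are, then remux the two finished files into one container with a third ffmpeg invocation. A minimal Python wrapper, where out/combined.mp4 is a hypothetical output name:

    import subprocess

    # Sketch: mux the separately processed audio and video files into one
    # container. "-c copy" remuxes without re-encoding; "-shortest" stops
    # at the end of the shorter stream.
    subprocess.run([
        "ffmpeg", "-y",
        "-i", "out/daft-punk.mp3",  # processed audio from the C program
        "-i", "out/daft-punk.mp4",  # processed video from the C program
        "-c", "copy",
        "-shortest",
        "out/combined.mp4",         # hypothetical combined output name
    ], check=True)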

  • Basic "pass-through" use of FFmpegReader/FFmpegWriter in scikit-video

    6 February 2021, by JonathanZ supports MonicaC

    I am starting to use scikit-video and am having trouble writing files. I have reduced the problem to the simplest possible example:

    import skvideo.io

    vid_file = "6710185719062326259_stamp_25pct.mp4"
    output_file = "out_temp3.mp4"
    reader = skvideo.io.FFmpegReader(vid_file)
    writer = skvideo.io.FFmpegWriter(output_file)
    for frame in reader.nextFrame():
        writer.writeFrame(frame)
    writer.close()

    I'm playing the files in VLC; the vid_file plays correctly, but the output file, though playable, is mostly big green blocks (though I can discern some details of the original video in it).

    My goal, of course, is to do "interesting" manipulations of the frame before I write it out, but I need to get the "no modifications" version working correctly first. I'm also going to be using this on large files, so the vread/vwrite functions that process an entire file at once are not appropriate.

    I'm guessing I need to set the appropriate values in the outputdict parameter for the FFmpegWriter, but there are so many that I don't know where to start. I have tried

    writer = skvideo.io.FFmpegWriter(output_file, outputdict={'-crf': '0', '-pix_fmt': 'rgb24'})

    (-crf 0 to suppress any compression, -pix_fmt rgb24 as that's what FFmpegReader says it delivers by default), but these don't work either.

    Any ideas on how to make this work?

    Here's the skvideo.io.ffprobe video information for the input file.

    {
        "@index": "0",
        "@codec_name": "h264",
        "@codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
        "@profile": "High",
        "@codec_type": "video",
        "@codec_time_base": "1/30",
        "@codec_tag_string": "avc1",
        "@codec_tag": "0x31637661",
        "@width": "480",
        "@height": "270",
        "@coded_width": "480",
        "@coded_height": "272",
        "@has_b_frames": "2",
        "@pix_fmt": "yuv420p",
        "@level": "21",
        "@chroma_location": "left",
        "@refs": "1",
        "@is_avc": "true",
        "@nal_length_size": "4",
        "@r_frame_rate": "15/1",
        "@avg_frame_rate": "15/1",
        "@time_base": "1/15360",
        "@start_pts": "0",
        "@start_time": "0.000000",
        "@duration_ts": "122880",
        "@duration": "8.000000",
        "@bit_rate": "183806",
        "@bits_per_raw_sample": "8",
        "@nb_frames": "120",
        "disposition": {
            "@default": "1",
            "@dub": "0",
            "@original": "0",
            "@comment": "0",
            "@lyrics": "0",
            "@karaoke": "0",
            "@forced": "0",
            "@hearing_impaired": "0",
            "@visual_impaired": "0",
            "@clean_effects": "0",
            "@attached_pic": "0",
            "@timed_thumbnails": "0"
        },
        "tag": [
            {
                "@key": "language",
                "@value": "und"
            },
            {
                "@key": "handler_name",
                "@value": "VideoHandler"
            }
        ]
    }

    I will mention that when I ffprobe the output file the only differences I see are 1) the timing data is different, which isn't surprising, and 2) the output file has

        "@has_b_frames": "0",
    "@pix_fmt": "yuv444p",

    I'm pretty confident the reader is working okay, because if I write out the data with

    skimage.io.imsave('x.png', frame, check_contrast=False)

    it looks good.

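    One direction worth trying (an assumption on my part, not a verified fix): the ffprobe output above says the source is yuv420p at 15 fps, while the written file comes out as yuv444p, which many players decode badly. Declaring explicitly what FFmpegWriter receives and what it should encode might help; a minimal sketch:

    import skvideo.io

    # Sketch: declare the incoming raw frames (rgb24 at 15 fps, per the
    # ffprobe output above) and request a player-friendly encode
    # (H.264 in yuv420p). '-crf 17' is an arbitrary near-lossless choice.
    writer = skvideo.io.FFmpegWriter(
        "out_temp3.mp4",
        inputdict={'-pix_fmt': 'rgb24', '-r': '15'},
        outputdict={'-vcodec': 'libx264', '-pix_fmt': 'yuv420p',
                    '-r': '15', '-crf': '17'},
    )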

  • Remove random background from video using ffmpeg or Python

    20 April 2024, by Raheel Shahzad

    I want to remove the background from a person's video using ffmpeg or Python: if I record a video anywhere, I want to detect the person in the video and remove everything except that person. I am not asking about green or single-color backgrounds, as those can be handled with chroma key; that is not what I am looking for.

    I've tried this (https://tryolabs.com/blog/2018/04/17/announcing-luminoth-0-1/) approach, but it gives me a rectangular box as output. That is informative, since it narrows down the area to explore, but I still need to remove the rest of the background.
    I've also tried GrabCut (https://docs.opencv.org/4.1.0/d8/d83/tutorial_py_grabcut.html), but it needs user interaction, otherwise the result isn't good.
    I've also tried ffmpeg, using this example (http://oioiiooixiii.blogspot.com/2016/09/ffmpeg-extract-foreground-moving.html), but it needs a still image of the background, so I took a background picture before recording the video with the person; even then, a lot more is required to take the difference between the background image and each video frame.

    For the OpenCV approach, I've tried this:

    import cv2 as cv
    import numpy as np
    from matplotlib import pyplot as plt

    img = cv.imread('pic.png')
    mask = np.zeros(img.shape[:2], np.uint8)
    bgdModel = np.zeros((1, 65), np.float64)
    fgdModel = np.zeros((1, 65), np.float64)
    rect = (39, 355, 1977, 2638)  # rectangle assumed to contain the person
    cv.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv.GC_INIT_WITH_RECT)
    mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
    img = img * mask2[:, :, np.newaxis]
    plt.imshow(img), plt.colorbar(), plt.show()

    But it removes some parts of the person too. I also tried the ffmpeg way, but the result isn't good.

    ffmpeg -report -y -i "img.jpg" -i "vid.mov" -filter_complex "[1:v]format=yuva444p,lut=c3=128[video2withAlpha];[0:v][video2withAlpha]blend=all_mode=difference[out]" -map "[out]" "output.mp4"

    All I need is a person's image/video taken against any normal background, without user interaction such as area selection or anything like that. Luminoth has pretrained data, but it gives a box around the person rather than the exact person, so I cannot use it alone to remove the background. Any help or guidance on removing the background will be appreciated.
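
    One approach worth sketching (my suggestion, not from the original post): run a pretrained person-segmentation model on every frame, for example MediaPipe's selfie-segmentation solution, which returns a pixel-level mask rather than a bounding box and needs no user interaction. The file names below are hypothetical:

    import cv2
    import mediapipe as mp

    # Sketch: per-frame person segmentation with MediaPipe; everything the
    # model classifies as background is blacked out. Edge quality depends
    # on the model, so the mask may still need refinement (e.g. feathering).
    segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

    cap = cv2.VideoCapture("vid.mov")            # hypothetical input file
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("person_only.mp4",     # hypothetical output file
                          cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        person = result.segmentation_mask > 0.5  # boolean person mask
        frame[~person] = 0                       # remove the background
        out.write(frame)

    cap.release()
    out.release()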