Advanced search

Media (3)


Other articles (37)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
    - the browser you are using, including the exact version
    - as precise an explanation of the problem as possible
    - if possible, the steps taken that led to the problem
    - a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
    To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6608)

  • avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks

    31 October 2019, by Guo, Yejun
    avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks
    

    This filter accepts all dnn networks which do image processing.
    Currently, frames in the rgb24 and bgr24 formats are supported. Other
    formats such as gray and YUV will be supported next. The dnn network
    can accept data in float32 or uint8 format, and it can change the
    frame size.

    The following is a python script that halves the value of the first
    channel of each pixel. It demonstrates how to set up and execute a
    dnn model with python+tensorflow, and it generates the .pb file
    which will be used by ffmpeg.

    import tensorflow as tf
    import numpy as np
    import imageio
    in_img = imageio.imread('in.bmp')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]
    filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
    filter = tf.Variable(filter_data)
    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
    sess=tf.Session()
    sess.run(tf.global_variables_initializer())
    output = sess.run(y, feed_dict={x: in_data})
    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
    output = output * 255.0
    output = output.astype(np.uint8)
    imageio.imsave("out.bmp", np.squeeze(output))

    To do the same thing with ffmpeg:
    - generate halve_first_channel.pb with the above script
    - generate halve_first_channel.model with tools/python/convert.py
    - try the following commands:
    ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
    ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
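    A quick way to sanity-check the two outputs against the script's intent (this helper is an illustration, not part of the commit; the file names from the commands above are assumed):

```python
import numpy as np

def first_channel_halved(src, dst, tol=2):
    # True if dst's first channel is roughly half of src's and the
    # remaining channels are unchanged, within a small uint8 tolerance
    # to absorb rounding in the encode/decode round trip.
    src = src.astype(np.int16)
    dst = dst.astype(np.int16)
    first_ok = np.all(np.abs(dst[..., 0] - src[..., 0] // 2) <= tol)
    rest_ok = np.all(np.abs(dst[..., 1:] - src[..., 1:]) <= tol)
    return bool(first_ok and rest_ok)
```

    For example, `first_channel_halved(imageio.imread('in.bmp'), imageio.imread('out.native.png'))` should return True if the filter behaved as described.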

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] configure
    • [DH] doc/filters.texi
    • [DH] libavfilter/Makefile
    • [DH] libavfilter/allfilters.c
    • [DH] libavfilter/vf_dnn_processing.c
  • FFmpeg Zoom and Rotate Causes Image Cropping Instead of Allowing It to Go Beyond Frame Edges [closed]

    6 October 2024, by XVersi

    I am trying to create a zoom-in video with a gentle rotation effect using FFmpeg. My goal is to zoom in on the center of the image while allowing parts of the image to "go beyond" the frame boundaries during the zoom and rotation. However, what I see is that FFmpeg seems to crop the image to fit within the output frame, and instead of keeping the full image intact, it fills the rest of the frame with black sections, effectively trimming parts of my image.


    Here is the code I'm using in Go to generate the video using FFmpeg:


    func createZoomInVideoWithRotation(ffmpegPath, imagePath, outputPath string) error {
        cmd := exec.Command(ffmpegPath, "-i", imagePath, "-vf",
            `[0:v]scale=9000x5000,zoompan=z='min(zoom+0.002,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125,rotate='PI/800*t',trim=duration=20[v1];[v1]scale=1080:1920[v]`,
            "-c:v", "libx264", "-crf", "18", "-preset", "slow", outputPath)
        err := cmd.Run()
        if err != nil {
            return fmt.Errorf("error executing ffmpeg command: %w", err)
        }

        fmt.Println("Zoom-in and rotation video created successfully!")
        return nil
    }

    func main() {
        ffmpegPath := `C:\Users\username\Downloads\ffmpeg-7.0.2-essentials_build\ffmpeg-7.0.2-essentials_build\bin\ffmpeg.exe`
        imagePath := `C:\Users\username\video_proj\image.jpg`
        outputPath := `C:\Users\username\video_proj\output_zoom_rotate.mp4`

        err := createZoomInVideoWithRotation(ffmpegPath, imagePath, outputPath)
        if err != nil {
            fmt.Println("Error creating zoom and rotate video:", err)
        }
    }


    What I tried: removing the final scale=1080:1920 at the end of the filter chain, to prevent FFmpeg from resizing the video to a fixed size, hoping this would let the image remain at its original size without being cropped to fit the frame.


    What I expected: the image would zoom in on its center and rotate, and during this process parts of the image would be allowed to move beyond the boundaries of the video output frame. There would be no cropping or resizing of the image, meaning the full original image would remain intact even if it extends beyond the video frame. Essentially, I wanted the image to "overflow" outside of the set dimensions during the rotation, without being forced to fit within the output frame and without black borders indicating missing parts of the image.
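    A sketch of one possible workaround, under the assumption that the clipping comes from rotate inheriting the zoompan output size: enlarge the working canvas first, rotate on that oversized canvas, and only crop to the target size at the very end. The sizes and expressions below are illustrative, not a verified recipe; building the filtergraph string in a small helper keeps it readable:

```python
def build_zoom_rotate_filter(out_w=1080, out_h=1920, duration=20, frames=125):
    # Work on a canvas twice the output size so rotated corners are less
    # likely to leave the working frame (factor is an assumption to tune).
    work_w, work_h = out_w * 2, out_h * 2
    steps = [
        f"scale={work_w}:{work_h}",                 # oversize before anything else
        f"zoompan=z='min(zoom+0.002,1.5)'"
        f":x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'"
        f":d={frames}:s={work_w}x{work_h}",         # keep the big canvas size
        "rotate='PI/800*t':ow=iw:oh=ih",            # rotate on the big canvas
        f"trim=duration={duration}",
        f"crop={out_w}:{out_h}",                    # take the output window last
    ]
    return ",".join(steps)
```

    The returned string would replace the `-vf` argument; the same chain can of course be assembled directly in the Go code above.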


  • How Do I Get Python To Capture My Screen At The Right Frame Rate

    14 July 2024, by John Thesaurus

    I have this Python script that is supposed to record my screen on macOS.


    import cv2
    import numpy as np
    from PIL import ImageGrab
    import subprocess
    import time

    def record_screen():
        # Define the screen resolution
        screen_width, screen_height = 1440, 900  # Adjust this to match your screen resolution
        fps = 30  # Target FPS for recording

        # Define the ffmpeg command
        ffmpeg_cmd = [
            'ffmpeg',
            '-y',  # Overwrite output file if it exists
            '-f', 'rawvideo',
            '-vcodec', 'rawvideo',
            '-pix_fmt', 'bgr24',
            '-s', f'{screen_width}x{screen_height}',  # Size of one frame
            '-r', str(fps),  # Input frames per second
            '-i', '-',  # Input from pipe
            '-an',  # No audio
            '-vcodec', 'libx264',
            '-pix_fmt', 'yuv420p',
            '-crf', '18',  # Higher quality
            '-preset', 'medium',  # Encoding speed
            'screen_recording.mp4'
        ]

        # Start the ffmpeg process
        ffmpeg_process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

        frame_count = 0
        start_time = time.time()

        while True:
            # Capture the screen
            img = ImageGrab.grab()
            img_np = np.array(img)

            # Convert and resize the frame
            frame = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
            resized_frame = cv2.resize(frame, (screen_width, screen_height))

            # Write the frame to ffmpeg
            ffmpeg_process.stdin.write(resized_frame.tobytes())

            # Display the frame
            cv2.imshow('Screen Recording', resized_frame)

            # Stop recording when 'q' is pressed
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

        # Close the ffmpeg process
        ffmpeg_process.stdin.close()
        ffmpeg_process.wait()

        # Release everything when job is finished
        cv2.destroyAllWindows()

    if __name__ == "__main__":
        record_screen()


    As you can see, it should be 30 frames per second, but the problem is that when I open the file afterwards it's all sped up. I think it has to do with the frame capture rate as opposed to the encoded rate, though I'm not quite sure. If I try to slow the video down afterwards so that it plays in real time, the video is just really choppy. And the higher I make the fps, the faster the video plays, meaning the more I have to slow it down, and then it's still choppy. I'm pretty sure it captures frames at a really slow rate and then puts them in a video that plays back at 30fps. Can anyone fix this? Anything that gets a working screen recorder on macOS, I will take.
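    The symptom (fast playback, and faster still at higher fps) is consistent with the capture loop running slower than the rate ffmpeg is told: ffmpeg timestamps the piped frames at 30 fps regardless of how long each grab actually took. One hedged fix is to keep the pipe fed at a constant rate by duplicating the latest frame whenever capture falls behind; the class below sketches only the pacing logic, and wiring it into the loop above is left to the caller:

```python
import time

class ConstantRateWriter:
    # Pace a variable-speed capture loop to a constant output frame rate
    # by reporting how many copies of the current frame must be emitted
    # to stay on schedule. This is a sketch of the pacing logic only.
    def __init__(self, fps, clock=time.monotonic):
        self.interval = 1.0 / fps
        self.clock = clock
        self.next_due = None

    def copies_due(self):
        # Number of frame slots that have elapsed since the last call;
        # emit that many copies of the freshest frame to keep 1/fps spacing.
        now = self.clock()
        if self.next_due is None:
            self.next_due = now + self.interval
            return 1
        n = 0
        while now >= self.next_due:
            n += 1
            self.next_due += self.interval
        return n
```

    In the capture loop, write `copies_due()` copies of each grabbed frame to `ffmpeg_process.stdin`. Alternatively, on macOS, recording directly with ffmpeg's avfoundation input (`-f avfoundation`, after listing devices with `ffmpeg -f avfoundation -list_devices true -i ""`) avoids the Python capture loop entirely.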
