Other articles (101)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: the creation of an additional version that can easily be viewed online, while the original remains downloadable in case it cannot be read in a web browser; and the extraction of the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works immediately. No separate configuration step is therefore required.

On other sites (11830)

  • Announcing the world's fastest VP8 decoder: ffvp8

    24 July 2010, by Dark Shikari — ffmpeg, google, speed, VP8

    Back when I originally reviewed VP8, I noted that the official decoder, libvpx, was rather slow. While there was no particular reason that it should be much faster than a good H.264 decoder, it shouldn't have been that much slower either! So, I set out with Ronald Bultje and David Conrad to make a better one in FFmpeg. This one would be community-developed and free from the beginning, rather than the proprietary code-dump that was libvpx. A few weeks ago the decoder was complete enough to be bit-exact with libvpx, making it the first independent free implementation of a VP8 decoder. Now, with the first round of optimizations complete, it should be ready for primetime. I'll go into some detail about the development process, but first, let's get to the real meat of this post: the benchmarks.

    We tested on two 1080p clips: Parkjoy, a live-action 1080p clip, and the Sintel trailer, a CGI 1080p clip. Testing was done using "time ffmpeg -vcodec libvpx or vp8 -i input -vsync 0 -an -f null -". We all used the latest SVN FFmpeg at the time of this posting; the last revision optimizing the VP8 decoder was r24471.

    [Graphs: Parkjoy and Sintel benchmark results]

    As these benchmarks show, ffvp8 is clearly much faster than libvpx, particularly on 64-bit. It's even faster by a large margin on Atom, despite the fact that we haven't even begun optimizing for it. In many cases, ffvp8's extra speed can make the difference between a video that plays and one that doesn't, especially in modern browsers with software compositing engines taking up a lot of CPU time. Want to get faster playback of VP8 videos? The next versions of FFmpeg-based players, like VLC, will include ffvp8. Want to get faster playback of WebM in your browser? Lobby your browser developers to use ffvp8 instead of libvpx. I expect Chrome to switch first, as they already use libavcodec for most of their playback system.

    Keep in mind ffvp8 is not “done” — we will continue to improve it and make it faster. We still have a number of optimizations in the pipeline that aren’t committed yet.

    Developing ffvp8

    The initial challenge, primarily pioneered by David and Ronald, was constructing the core decoder and making it bit-exact to libvpx. This was rather challenging, especially given the lack of a real spec. Many parts of the spec were outright misleading and contradicted libvpx itself. It didn't help that the suite of official conformance tests didn't even cover all the features used by the official encoder! We've already started adding our own conformance tests to deal with this. But I've complained enough in past posts about the lack of a spec; let's get onto the gritty details.

    The next step was adding SIMD assembly for all of the important DSP functions. VP8's motion compensation and deblocking filter are by far the most CPU-intensive parts, much the same as in H.264. Unlike H.264, the deblocking filter relies on a lot of internal saturation steps, which are free in SIMD but costly in a normal C implementation, making the plain C code even slower. Of course, none of this is a particularly large problem; any sane video decoder has all this stuff in SIMD.

    I tutored Ronald in x86 SIMD and wrote most of the motion compensation, intra prediction, and some inverse transforms. Ronald wrote the rest of the inverse transforms and a bit of the motion compensation. He also did the most difficult part: the deblocking filter. Deblocking filters are always a bit difficult because every one is different. Motion compensation, by comparison, is usually very similar regardless of video format; a 6-tap filter is a 6-tap filter, and most of the variation going on is just the choice of numbers to multiply by.

    The biggest challenge in a SIMD deblocking filter is to avoid unpacking, that is, going from 8-bit to 16-bit. Many operations in deblocking filters would naively appear to require more than 8-bit precision. A simple example in the case of x86 is abs(a-b), where a and b are 8-bit unsigned integers. The result of "a-b" requires a 9-bit signed integer (it can be anywhere from -255 to 255), so it can't fit in 8 bits. But this is quite possible to do without unpacking: (satsub(a,b) | satsub(b,a)), where "satsub" performs a saturating subtract on the two values. If the value is positive, it yields the result; if the value is negative, it yields zero. ORing the two together yields the desired result. This requires 4 ops on x86; unpacking would probably require at least 10, including the unpack and pack steps.
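    The trick above is easy to sanity-check outside of assembly. The following sketch (plain Python rather than x86 SIMD, purely illustrative) models "satsub" as an unsigned saturating subtract, like x86's PSUBUSB instruction does per byte, and exhaustively verifies the identity for all 8-bit pairs:

```python
# Illustrative model (plain Python, not SIMD) of the deblocking trick:
# abs(a - b) == satsub(a, b) | satsub(b, a) for 8-bit unsigned a, b.

def satsub(a, b):
    # Unsigned saturating subtract: clamps negative results to 0,
    # as PSUBUSB does for each byte in a SIMD register.
    return max(a - b, 0)

# One of the two saturating subtracts is always 0, and the other is
# |a - b|, so ORing them reproduces the absolute difference.
assert all(
    (satsub(a, b) | satsub(b, a)) == abs(a - b)
    for a in range(256)
    for b in range(256)
)
print("identity holds for all 65536 8-bit pairs")
```

    The OR never mixes bits from two nonzero values: whichever subtract would go negative saturates to zero, leaving only the true absolute difference.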

    After the SIMD came optimizing the C code, which still took a significant portion of the total runtime. One of my biggest optimizations was adding aggressive "smart" prefetching to reduce cache misses. ffvp8 prefetches the reference frames (PREVIOUS, GOLDEN, and ALTREF)… but only the ones which have been used reasonably often this frame. This lets us prefetch everything we need without prefetching things that we probably won't use. libvpx very often encodes frames that almost never (but not quite never) use GOLDEN or ALTREF, so this optimization greatly reduces time spent prefetching in a lot of real videos. There are, of course, countless other optimizations too numerous to list here, such as David's entropy decoder optimizations. I'd also like to thank Eli Friedman for his invaluable help in benchmarking a lot of these changes.

    What next? Altivec (PPC) assembly is almost nonexistent, with the only functions being David's motion compensation code. NEON (ARM) is completely nonexistent: we'll need that to be fast on mobile devices as well. Of course, all this will come in due time — and as always — patches welcome!

    Appendix: the raw numbers

    Here are the raw numbers (in fps) for the graphs at the start of this post, with standard error values:

    Core i7 620QM (1.6 GHz), Windows 7, 32-bit:
    Parkjoy ffvp8: 44.58 ± 0.44
    Parkjoy libvpx: 33.06 ± 0.23
    Sintel ffvp8: 74.26 ± 1.18
    Sintel libvpx: 56.11 ± 0.96

    Core i5 520M (2.4 GHz), Linux, 64-bit:
    Parkjoy ffvp8: 68.29 ± 0.06
    Parkjoy libvpx: 41.06 ± 0.04
    Sintel ffvp8: 112.38 ± 0.37
    Sintel libvpx: 69.64 ± 0.09

    Core 2 T9300 (2.5 GHz), Mac OS X 10.6.4, 64-bit:
    Parkjoy ffvp8: 54.09 ± 0.02
    Parkjoy libvpx: 33.68 ± 0.01
    Sintel ffvp8: 87.54 ± 0.03
    Sintel libvpx: 52.74 ± 0.04

    Core Duo (2 GHz), Mac OS X 10.6.4, 32-bit:
    Parkjoy ffvp8: 21.31 ± 0.02
    Parkjoy libvpx: 17.96 ± 0.00
    Sintel ffvp8: 41.24 ± 0.01
    Sintel libvpx: 29.65 ± 0.02

    Atom N270 (1.6 GHz), Linux, 32-bit:
    Parkjoy ffvp8: 15.29 ± 0.01
    Parkjoy libvpx: 12.46 ± 0.01
    Sintel ffvp8: 26.87 ± 0.05
    Sintel libvpx: 20.41 ± 0.02
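    To put the appendix in perspective, the speedup ratios are easy to compute from the fps values; here is a small illustrative Python snippet using the 64-bit Linux (Core i5 520M) rows above:

```python
# Speedup of ffvp8 over libvpx, computed from the raw fps numbers
# reported above for the Core i5 520M (Linux, 64-bit) machine.
results = {
    "Parkjoy": (68.29, 41.06),   # (ffvp8 fps, libvpx fps)
    "Sintel": (112.38, 69.64),
}

for clip, (ffvp8_fps, libvpx_fps) in results.items():
    speedup = ffvp8_fps / libvpx_fps
    print(f"{clip}: ffvp8 decodes {speedup:.2f}x faster than libvpx")
```

    On that machine, ffvp8 comes out roughly 1.6x faster than libvpx on both clips.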

  • Could anyone help me understand why moviepy is rendering at 2.5 it/s?

    23 December 2023, by tristan

    I'm writing a program that uses moviepy to make those weird reddit thread videos with MC parkour playing in the background (real original, I know), and everything is good except for when I'm rendering video, which seems to consume a ton of memory and moves really... really slowly, like 2.5 it/s. Could anyone help? Also, I'm a novice programmer with no bearing on what is conventional or proper, so sorry if my code is very bad.

    


    from moviepy.video.fx.all import resize
from moviepy.video.tools.subtitles import SubtitlesClip
from moviepy.editor import (
    CompositeVideoClip,
    AudioFileClip,
    VideoFileClip,
    ImageClip,
    TextClip
)
import random
import moviepy.config as cfg
import librosa
from imagegenerator import draw_title
from audioeditor import concatenate_audios
import soundfile as sf
import numpy as np

# Constants
VIDEO_FADE_DURATION = 0.4
SPEED_FACTOR = 1.1
TEXT_WIDTH = 600
MINIMUM_FONT_SIZE = 60
FONT_COLOR = "white"
OUTLINE_COLOR = "black"
TITLE_ANIMATION_DURATION = 0.25
ANIMATION_DURATION = 0.2

# Configure imagemagick binary
cfg.change_settings(
    {
        "IMAGEMAGICK_BINARY": "magick/magick.exe"
    }
)

# Ease-out function
def ease_out(t):
    return 1 - (1 - t) ** 2

# Overlap audio files
def overlap_audio_files(audio_path1, audio_path2):
    # Load the first audio file
    audio1, sr1 = librosa.load(audio_path1, sr=None)

    # Load the second audio file
    audio2, sr2 = librosa.load(audio_path2, sr=None)

    # Ensure both audio files have the same sample rate
    if sr1 != sr2:
        raise ValueError("Sample rates of the two audio files must be the same.")

    # Calculate the duration of audio2
    audio2_duration = len(audio2)

    # Tile audio1 to match the duration of audio2
    audio1 = np.tile(audio1, int(np.ceil(audio2_duration / len(audio1))))

    # Trim audio1 to match the duration of audio2
    audio1 = audio1[:audio2_duration]

    # Combine the audio files by superimposing them
    combined_audio = audio1 + audio2

    # Save the combined audio to a new file
    output_path = "temp/ttsclips/combined_audio.wav"
    sf.write(output_path, combined_audio, sr1)

    return output_path

# Generator function for subtitles with centered alignment and outline
def centered_text_generator_white(txt):
    return TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=FONT_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1072, 1682),
        method='caption',  # Draw a caption instead of a title
    )

# Generator function for subtitles with centered alignment and blurred outline
def centered_text_generator_black_blurred_outline(txt, blur_factor=3):
    outline_clip = TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=OUTLINE_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1080, 1688),
        method='caption',  # Draw a caption instead of a title
    )

    # Blur the black text (outline)
    blurred_outline_clip = outline_clip.fx(resize, 1.0 / blur_factor)
    blurred_outline_clip = blurred_outline_clip.fx(resize, blur_factor)

    return blurred_outline_clip

# Compile video function
def compile_video(title_content, upvotes, comments, tone, subreddit, video_num):
    # Set the dimensions of the video (720x1280 in this case)
    height = 1280

    # Concatenate the audios
    concatenate_audios()

    concatenated_audio_path = r"temp/ttsclips/concatenated_audio.mp3"
    title_audio_path = r"temp/ttsclips/title.mp3"

    title_audio = AudioFileClip(title_audio_path)
    concatenated_audio = AudioFileClip(concatenated_audio_path)

    # Calculate for video duration
    title_duration = title_audio.duration
    duration = concatenated_audio.duration

    # Set background
    background_path = f"saved_videos/newmcparkour.mp4"
    background = VideoFileClip(background_path)
    background_duration = background.duration
    random_start = random.uniform(0, background_duration - duration)
    background = background.subclip(random_start, random_start + duration)

    # Apply fade-out effect to both background clips
    background = background.crossfadeout(VIDEO_FADE_DURATION)

    # Generate the background image with rounded corners
    background_image_path = draw_title(title_content, upvotes, comments, subreddit)

    # Load the background image with rounded corners
    background_image = ImageClip(background_image_path)

    # Set the start of the animated title clip
    animated_background_clip = background_image.set_start(0)

    # Set the initial position of the text at the bottom of the screen
    initial_position = (90, height)

    # Calculate the final position of the text at the center of the screen
    final_position = [90, 630]

    # Animate the title clip to slide up over the course of the animation duration
    animated_background_clip = animated_background_clip.set_position(
        lambda t: (
            initial_position[0],
            initial_position[1]
            - (initial_position[1] - final_position[1])
            * ease_out(t / TITLE_ANIMATION_DURATION),
        )
    )

    # Set the duration of the animated title clip
    animated_background_clip = animated_background_clip.set_duration(
        TITLE_ANIMATION_DURATION
    )

    # Assign start times to title image
    stationary_background_clip = background_image.set_start(TITLE_ANIMATION_DURATION)

    # Assign positions to stationary title image
    stationary_background_clip = stationary_background_clip.set_position(final_position)

    # Assign durations to stationary title image
    stationary_background_clip = stationary_background_clip.set_duration(
        title_duration - TITLE_ANIMATION_DURATION
    )

    #  Select background music
    if tone == "normal":
        music_options = [
            "Anguish",
            "Garden",
            "Limerence",
            "Lost",
            "NoWayOut",
            "Summer",
            "Never",
            "Miss",
            "Touch",
            "Stellar"
        ]
    elif tone == "eerie":
        music_options = [
            "Creepy",
            "Scary",
            "Spooky",
            "Space",
            "Suspense"
        ]
    background_music_choice = random.choice(music_options)
    background_music_path = f"music/eeriemusic/{background_music_choice}.mp3"

    # Create final audio by overlapping background music and concatenated audio
    final_audio = AudioFileClip(
        overlap_audio_files(background_music_path, concatenated_audio_path)
    )

    # Release the concatenated audio
    concatenated_audio.close()

    # Create subtitles clip using the centered_text_generator
    subtitles = SubtitlesClip("temp/ttsclips/content_speechmarks.srt",
                            lambda txt: centered_text_generator_white(txt))
    subtitles_outline = SubtitlesClip("temp/ttsclips/content_speechmarks.srt",
                            lambda txt: centered_text_generator_black_blurred_outline(txt))

    # Overlay subtitles on the blurred background
    final_clip = CompositeVideoClip(
        [background, animated_background_clip, stationary_background_clip, subtitles_outline, subtitles]
    )

    # Set the final video dimensions and export the video
    final_clip = final_clip.set_duration(duration)
    final_clip = final_clip.set_audio(final_audio)

    final_clip.write_videofile(
        f"temp/videos/{video_num}.mp4",
        codec="libx264",
        fps=60,
        bitrate="8000k",
        audio_codec="aac",
        audio_bitrate="192k",
        preset="ultrafast",
        threads=8
    )


    # Release the title audio
    title_audio.close()

    # Release the background video and image
    background.close()
    background_image.close()

    # Release the final audio
    final_audio.close()

    # Release the subtitle clips
    subtitles.close()
    subtitles_outline.close()

    # Release the final video clip
    final_clip.close()


    


    I've tried turning down my settings, like setting the preset to "ultrafast" and dropping the bitrate, but nothing seems to work. The only thing I can think of now is that there is something I'm doing wrong with moviepy.

    


  • What is Google Analytics data sampling and what's so bad about it?

    16 August 2019, by Joselyn Khor — Analytics Tips, Development


    Google (2019) explains what data sampling is:

    “In data analysis, sampling is the practice of analysing a subset of all data in order to uncover the meaningful information in the larger data set.”[1]

    This basically means that instead of analysing all of the data, there's a threshold on how much data is analysed, and any data beyond that threshold is an assumption based on patterns.

    Google's (2019) data sampling thresholds:

    Ad-hoc queries of your data are subject to the following general thresholds for sampling:
    [Google] Analytics Standard: 500K sessions at the property level for the date range you are using
    [Google] Analytics 360: 100M sessions at the view level for the date range you are using (para. 3) [2]

    This threshold is limiting because your data in GA may become more inaccurate as the traffic to your website increases.

    Say you're looking through all your traffic data from the last year and find you have 5 million page views. Only 500K of that 5 million is accurate! The data for the remaining 4.5 million (90%) is an assumption based on the 500K sample size.
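    The arithmetic behind that example is simple to reproduce. The sketch below is purely illustrative (note that GA's threshold is counted in sessions, while the example above speaks of page views); it shows how little of the data is actually analysed once traffic exceeds the Analytics Standard threshold:

```python
# Fraction of traffic actually analysed under GA Standard's sampling
# threshold, using the figures from the example above.
SAMPLING_THRESHOLD = 500_000   # sessions analysed per ad-hoc query (GA Standard)
total_traffic = 5_000_000      # traffic in the queried date range

sampled_fraction = SAMPLING_THRESHOLD / total_traffic
estimated_fraction = 1 - sampled_fraction

print(f"analysed:  {sampled_fraction:.0%}")    # the 500K sample
print(f"estimated: {estimated_fraction:.0%}")  # extrapolated from the sample
```

    The analysed share only shrinks as traffic grows, which is the core of the complaint above.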

    This is a key weapon Google uses to sell to large businesses. In order to increase that threshold for more accurate reporting, upgrading to premium Google Analytics 360 for approximately US$150,000 per year seems to be the only choice.

    What's so bad about data sampling?

    It's unfair to say sampled data should be disregarded completely. There is a calculation ensuring it is representative, which can give you good-enough insights. However, we don't encourage it, as we don't just want "good enough" data. We want the actual facts.

    In a recent survey sent to Matomo customers, we found a large proportion of users switched from GA to Matomo due to the data sampling issue.

    The two reasons why data sampling isn't preferable:

    1. If the selected sample size is too small, you won't get a good representation of all the data.
    2. The bigger your website grows, the more inaccurate your reports will become.

    An example of why we don't fully trust sampled data: say you have an ecommerce store and see that your GA revenue reports aren't matching your actual sales data, due to data sampling. In GA you may be seeing revenue for the month as $1 million, instead of actual sales of $800K.

    The sampling here has caused an inaccuracy that could have negative financial implications. What you get in the GA report is an estimated dollar figure rather than the actual sales. Making decisions based on inaccurate data can be costly in this case. 

    Another disadvantage to sampled data is that you might be missing out on opportunities you would’ve noticed if you were given a view of the whole. E.g. not being able to see real patterns occurring due to the data already being predicted. 

    Not getting a chance to see things as they are, and only being able to rely on the conclusions and assumptions made by GA, is risky. The bigger your business grows, the less you can afford to make business decisions based on assumptions that could be inaccurate.

    If you feel you could be missing out on opportunities because your GA data is sampled data, get 100% accurately reported data. 

    The benefits of 100% accurate data

    Matomo doesn’t use data sampling on any of our products or plans. You get to see all of your data and not a sampled data set.

    Data quality is necessary for high impact decision-making. It’s hard to make strategic changes if you don’t have confidence that your data is reliable and accurate.

    Learn about how Matomo is a serious contender to Google Analytics 360. 

    Now you can import your Google Analytics data directly into your Matomo

    If you’re wanting to make the switch to Matomo but worried about losing all your historic Google Analytics data, you can now import this directly into your Matomo with the Google Analytics Importer tool.


    Take the challenge!

    Compare your Google Analytics data (sampled data) against your Matomo data, or if you don't have Matomo data yet, sign up to our 30-day free trial and start tracking!

    References:

    [1 & 2] About data sampling. (2019). In Analytics Help. Retrieved August 14, 2019, from https://support.google.com/analytics/answer/2637192