Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (42)

  • Update from version 0.1 to 0.2

    24 June 2013

    Explanations of the various notable changes when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

On other sites (8815)

  • Having a problem with video steganography: the hidden message is always lost

    5 April 2022, by user7025125

    Friends, I am currently looking for a way to do video steganography. I have succeeded in splitting frames from a video file and hiding messages inside them, but when I combine these frames back into a video and try to extract the info from the resulting video, I always fail. I guess the problem is video compression.

    Here is my code:

from stegano import lsb  # pip install stegano

import cv2               # pip install opencv-python
import math
import os
import shutil
from moviepy.editor import VideoFileClip, AudioFileClip  # pip install moviepy


def split_string(s_str, count=10):
    # Split s_str into `count` chunks of roughly equal length,
    # one chunk per frame to be hidden.
    per_c = math.ceil(len(s_str) / count)
    c_cout = 0
    out_str = ''
    split_list = []
    for s in s_str:
        out_str += s
        c_cout += 1
        if c_cout == per_c:
            split_list.append(out_str)
            out_str = ''
            c_cout = 0
    if c_cout != 0:
        split_list.append(out_str)
    return split_list


def frame_extraction(video):
    # Dump every frame of `video` into ./tmp as numbered PNGs (0.png, 1.png, ...).
    temp_folder = "./tmp"
    if not os.path.exists(temp_folder):
        os.makedirs(temp_folder)
        print("[INFO] tmp directory is created")

    vidcap = cv2.VideoCapture(video)
    count = 0

    while True:
        success, image = vidcap.read()
        if not success:
            break
        cv2.imwrite(os.path.join(temp_folder, "{:d}.png".format(count)), image)
        count += 1
        print("[INFO] frame {} is extracted".format(count))


def encode_string(input_string, root="./tmp/"):
    # Hide one chunk of the message in each of the first frames using LSB steganography.
    split_string_list = split_string(input_string)
    for i in range(0, len(split_string_list)):
        f_name = "{}{}.png".format(root, i)
        secret_enc = lsb.hide(f_name, split_string_list[i])
        secret_enc.save(f_name)
        print("[INFO] frame {} holds {}".format(f_name, lsb.reveal(f_name)))


def decode_string(video):
    # Re-extract the frames and read the hidden chunks back
    # until a frame without a payload is found.
    frame_extraction(video)
    secret = []
    root = "./tmp/"
    for i in range(len(os.listdir(root))):
        f_name = "{}{}.png".format(root, i)
        print("[INFO] frame {} is decoding".format(f_name))
        secret_dec = lsb.reveal(f_name)
        if secret_dec is None:
            break
        secret.append(secret_dec)
    print("[INFO] secret is {}".format("".join(secret)))
    # clean_tmp()


def clean_tmp(path="./tmp"):
    if os.path.exists(path):
        shutil.rmtree(path)
        print("[INFO] tmp files are cleaned up")


def main():
    input_string = input("Enter the input string: ")
    f_name = input("enter the name of video: ")

    # extract the frames from the source file
    frame_extraction(f_name)

    # split the file path and the extension
    file_path, file_ext = os.path.splitext(f_name)

    # write the audio track to a temporary file
    audio_path = file_path + "_temp.mp3"
    video = VideoFileClip(f_name)
    video.audio.write_audiofile(audio_path)

    # hide the message in the frames
    encode_string(input_string)

    # build a silent video from the images in the tmp folder
    fps = 30
    img_root = "./tmp/"
    # fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    video_file_path = file_path + "_temp.avi"
    # get the frame size from the first image in the tmp folder
    img = cv2.imread(img_root + "0.png")
    height, width, layers = img.shape
    size = (width, height)
    videoWriter = cv2.VideoWriter(video_file_path, fourcc=fourcc, fps=fps, frameSize=size)
    for i in range(len(os.listdir(img_root))):
        frame = cv2.imread(img_root + str(i) + '.png')
        videoWriter.write(frame)
    videoWriter.release()

    # merge the silent video with the audio
    video = VideoFileClip(video_file_path)
    audio_clip = AudioFileClip(audio_path)
    video = video.set_audio(audio_clip)
    video.write_videofile(file_path + "_hide.avi")
    clean_tmp()


if __name__ == "__main__":
    while True:
        print("1. Hide a message in a video  2. Reveal the secret from a video")
        print("any other value to exit")
        choice = input()
        if choice == '1':
            main()
        elif choice == '2':
            decode_string(input("enter the name of the video with extension: "))
        else:
            break

    I have tried the mp4, avi and mov formats, but none of them worked.
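
    For what it's worth, here is a minimal sketch of one possible fix (not part of the original post): rebuild the video with a lossless codec such as FFV1 instead of XVID, since lossy codecs quantize pixel values and destroy the LSB payload. The function name and the output file are illustrative, and it assumes the numbered frames are already in ./tmp:

import os
from subprocess import call


def frames_to_lossless_video(img_root="./tmp", fps=30, out_file="out_hide.mkv"):
    # FFV1 in a Matroska container is lossless, so the LSBs of every pixel
    # survive; XVID or x264 would re-quantize them away.
    call([
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", os.path.join(img_root, "%d.png"),  # frames named 0.png, 1.png, ...
        "-c:v", "ffv1",
        out_file,
    ])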

    If you have any ideas or suggestions, I would be very grateful.

  • Is there any open-source solution to display a remote stream inside a HoloLens 2 UWP Vuforia application?

    19 April 2023, by T777

    What do we need?

    We are trying to develop an application for quality management in which we show a hologram on a metal part as an assistance marking (using HoloLens 2 + Vuforia + Model Targets). The employee uses a sensor to follow this assistance marking, and the data is analyzed live by a test device. The results are output on a screen and are visible in a closed-source application from the manufacturer of the test device.

    Capturing the video output:
    The current plan is to capture the video stream of the test device via a capture card, add a video panel inside the Vuforia app via MRTK2, and stream the captured video to the HoloLens 2 using OBS or an OpenCV Python script for screen recording.
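
    As a rough sketch of what such a script might look like (an assumption on our side, not an existing implementation): grab frames from the capture card with OpenCV and pipe them to ffmpeg, which sends an H.264 MPEG-TS stream over UDP. The device index and the target address are placeholders:

import subprocess

import cv2

cap = cv2.VideoCapture(0)  # capture-card index; placeholder
fps = 30
ok, frame = cap.read()
assert ok, "could not read from the capture device"
height, width = frame.shape[:2]

# ffmpeg reads raw BGR frames on stdin and sends them as an MPEG-TS/UDP stream.
ffmpeg = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", "{}x{}".format(width, height), "-r", str(fps),
    "-i", "-",
    "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
    "-f", "mpegts", "udp://192.168.0.10:5000",  # receiver address; placeholder
], stdin=subprocess.PIPE)

while ok:
    ffmpeg.stdin.write(frame.tobytes())
    ok, frame = cap.read()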

    What we have tried so far

    1) Sending a raw UDP stream via RTMP, decoding and converting it with a GStreamer server, and writing our own library in Unity for receiving.
    Result: temporarily stopped, because receiving the UDP streams needs connection/session management (signalling), frame syncing, and agreement on video size, color format, frame rate, etc., and we have no solution for that. Implementing any of this ourselves would be highly complex and very time-consuming.

    2) Using available protocols that I can find on the web
    There are already some protocols developed for session creation and streaming:

    • HTTP streaming (HLS) (transport + session)
    • RTMP (transport + session)
    • RTP (transport) + RTSP (session)
    • WebRTC: possible with different protocol stacks: RTP/TCP/UDP (transport) + SDP (a standardized format for video parameters) + ICE (p2p) / WHIP (HTTP, client-server) / WebSocket (client-server) as signaling protocols, with some good open-source streaming servers available (GStreamer, MediaMTX and SRS)

    When using these, the video will typically be encoded as H.264 and needs to be decoded on the HoloLens 2. There are APIs for C/C++ native (hardware) decoding libraries like unity-vlc and ffmpeg.NET, which require the ffmpeg media library. I could figure out (not tested) that there is a hardware H.264 decoder on the HoloLens 2, but I have no clue how to access it, since I couldn't discover any information about HoloLens 2 media libraries.
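
    As a desktop sanity check (again just a sketch, not HoloLens code), OpenCV's FFmpeg backend can receive and decode such an H.264/MPEG-TS stream in a few lines; the URL matches the placeholder sender above:

import cv2

# Listen on UDP port 5000 and let the FFmpeg backend handle the H.264 decoding.
cap = cv2.VideoCapture("udp://0.0.0.0:5000", cv2.CAP_FFMPEG)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()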

    3) Using Unity packages

    Will be testing other compile options tomorrow...

    • Mixed Reality WebRTC (https://github.com/microsoft/MixedReality-WebRTC):
      Various protocol support; Microsoft brought WebRTC specifically to the HoloLens.
      Deprecated; as far as I can see there is only support for HoloLens 1 and ARM32, so I cannot evaluate whether trying it is worth it.

    What are the next options?

    • Developing a raw UDP streaming library with Unity directly.
    • Rebuilding the application to be compatible with VisionLib (ARM32) and MixedReality-WebRTC (ARM32).
    • Porting ffmpeg + an API to UWP?
    • There also seem to be some efforts to make WebRTC available on UWP platforms in general: https://github.com/microsoft/winrtc

    The questions

    • Does Vuforia support ARM32?
    • How can we access the hardware decoder of the HoloLens 2 from Unity code?

  • Workflow for creating animated hand-drawn videos - encoding difficulties

    8 December 2017, by Mircode

    I want to create YouTube videos, kind of in the style of a whiteboard animation.

    TL;DR question: how can I encode into a lossless RGB video format, including an alpha channel, with ffmpeg?

    In more detail, my current workflow looks like this:

    I draw the slides in Inkscape, group all paths that are to be drawn in one go (one scene, so to speak) and store the slide as an svg. Then I run a custom Python script over it, which animates the slides as described here: https://codepen.io/MyXoToD/post/howto-self-drawing-svg-animation. Each frame is exported as svg, converted to png and fed to ffmpeg to make a video from it.

    For every scene (a couple of paths being drawn; there are several scenes per slide) I create a separate video file, and I also store a png file that contains the last frame of that video.

    I then use kdenlive to join it all together: a video containing the drawing of the first scene, then a png which holds the last image of that video while I talk about the drawing, then the next animated drawing, then the next still image where I continue talking, and so on. I use these intermediate images because freezing the last frame is tedious in kdenlive and I have around 600 scenes. Here I do the editing, adjust the duration of the still images and render the final video. (A still "hold" clip can also be generated directly with ffmpeg, as sketched below.)
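    For instance, something along these lines (a sketch; the file names, duration and frame rate are placeholders, and the encoding options mirror the mp4 command further down):

    from subprocess import Popen

    fps = 30  # placeholder frame rate
    p = Popen(['ffmpeg', '-y', '-loop', '1', '-i', 'last_frame.png', '-t', '5',
               '-r', str(fps), '-vcodec', 'libx264', '-crf', '21', '-pix_fmt', 'yuv420p',
               'hold_clip.mp4'])
    p.wait()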

    The background of the video is a photo of a blackboard which never changes; the strokes are paths with a filter that makes them look like chalk.

    So far so good, everything almost works.

    My problem is: whenever there is a transition between an animation and a still image, it is visible in the final result. I have tried several approaches to make this work, but none of them is without flaws.

    My first approach was to encode the animations as mp4, like this:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-crf', '21', '-bf', '2', '-flags', '+cgop', '-pix_fmt', 'yuv420p', '-movflags', 'faststart', '-r', str(fps), videofile], stdin=PIPE)

    which is recommended for YouTube. But then there is a slight brightness difference between the video and the still image.

    Then I tried mov with the png codec:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'png', '-r', str(fps), videofile], stdin=PIPE)

    I think this encodes every frame as a png inside the video. It creates much bigger files, since every frame is encoded separately, but that's ok, since I can use transparency for the background and store just the chalk strokes. However, sometimes I want to wipe parts of the chalk on a slide away, which I do by drawing background over it. This would work if those overlaid, animated background chunks stored in the video looked exactly like the underlying background png. But they don't: they are slightly more blurry, and I believe the color changes a tiny bit as well. I don't understand this, since I thought the video simply stores a sequence of pngs... Is there some quality setting that I'm missing here?

    Then I read about ProRes 4444 and tried that:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'prores_ks', '-pix_fmt', 'yuva444p10le', '-alpha_bits', '8', '-profile:v', '4444', '-r', str(fps), videofile], stdin=PIPE)

    and this actually seems to work. However, the animation files become larger than the bunch of png files they contain, probably because this format stores 12 bits per channel. This is not that horrible, since only the intermediate videos grow big; the final result is still ok.

    But ideally there would be a lossless codec which stores in RGB color space with 8 bits per channel, plus 8 bits for alpha, and encodes only the difference to the previous frame (because all that changes from frame to frame is a tiny bit of chalk drawing). Is there such a thing? Alternatively, I'd also be ok without transparency, but then I would have to store the background in every scene. But if only changes are stored from frame to frame within one scene, that should be manageable.
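    One possibility that might fit (an untested suggestion, not something from the original post): the QuickTime Animation codec (qtrle) in a mov container is lossless 8-bit RGB with an alpha channel (the 'argb' pixel format) and run-length encodes rows, skipping lines that did not change from the previous frame, so mostly static chalk frames should stay small. A sketch in the same style as the commands above, reusing their fps, videofile and PIPE variables:

    p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'qtrle', '-pix_fmt', 'argb', '-r', str(fps), videofile], stdin=PIPE)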

    Or should I make some fundamental changes to my workflow altogether?

    Sorry that this is rather lengthy; I appreciate any help.

    Cheers!