Advanced search

Media (0)

Keyword: - Tags -/protocoles

No media matching your criteria is available on this site.

Other articles (31)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours via the form at the bottom of the page.

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to the server and a root account, which the script needs in order to install the dependencies. Contact your provider if you do not have them.
    The documentation on using this installation script is available here.
    The code of this (...)

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
    The solution is either to disable safe_mode or to place the script in a directory that Apache can access for the site.
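
    As an illustration (not from the original article): disabling it comes down to a single directive in php.ini. Note that safe_mode was deprecated in PHP 5.3 and removed entirely in PHP 5.4, so this only applies to older installations:

        ; php.ini
        safe_mode = Off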

On other sites (5872)

  • How to make FFMPEG video grayscale?

    15 October 2018, by Aang

    In my program I use Python's subprocess module to call FFmpeg and turn a sequence of images into a (grayscale) video. It works and a video is created, but on closer inspection I see that the video has encoded the different intensities incorrectly.

    Here is my code:

                subprocess.call(['/usr/local/bin/ffmpeg',
                                 '-framerate', framerate,
                                 '-f', 'image2', '-pattern_type', 'glob',
                                 '-i', self.directory + '/orbit_*.png',
                                 '-r', '10', '-s', '620x380',
                                 '-flags', 'gray',
                                 self.directory + '.avi'])

    Here is a link to the video that's created: https://drive.google.com/open?id=0Bxt1siua2KQma0JaMVBMcE9TOEE

    If you look at the scale bar on the right in the video, which is normally a color gradient, you'll see that the same shade shows up twice on the bar. I think it's because FFmpeg reads colors with the same intensity (for example, a yellow and a blue of equal luminance) the same way, so when the photo is encoded into grayscale those colors become indistinguishable.

    What can I do? Is this a matter of changing the '-flags', 'gray' parameters of my subprocess call?
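
    For comparison, here is a minimal sketch (not from the original question) that requests grayscale through the format video filter instead of the codec-level -flags gray option; directory stands in for self.directory above:

        import subprocess

        framerate = '10'               # same variable as in the question
        directory = '/path/to/frames'  # placeholder for self.directory

        # Convert the frames to single-channel gray with the 'format' video
        # filter and do the scaling in the same filter chain, instead of
        # using the codec-level '-flags gray' option.
        subprocess.call([
            '/usr/local/bin/ffmpeg',
            '-framerate', framerate,
            '-f', 'image2', '-pattern_type', 'glob',
            '-i', directory + '/orbit_*.png',
            '-vf', 'format=gray,scale=620:380',
            '-r', '10',
            directory + '.avi',
        ])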

  • How to get the exact position in the video as in the image view?

    16 February 2024, by Dhruvisha Joshi

    I want to offer a photo-editing feature in my app, so I let the user add text to a photo and then convert it to a video using an FFmpeg command. Here is what my screen looks like where the user can edit photos:

    


    Here is my command that adds the text and converts the photo to a video:
ffmpeg -loop 1 -i /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/input.jpg -filter_complex "[0]scale=1080:trunc(ow/a/2)*2[video0];[video0]drawtext=text='Dyjfyjyrjyfjyfkyfk':fontfile=/private/var/containers/Bundle/Application/DE5C8DAA-4D66-4345-834A-89F8AC19DF9B/Clear Status.app/avenyt.ttf:fontsize=66.55112651646448:fontcolor=#FFFFFF:x=349.92:y=930.051993067591" -c:v libx264 -t 5 -pix_fmt yuv420p -y /var/mobile/Containers/Data/Application/88F535C3-A300-456C-97BB-1A9B83EAEE7B/Documents/Compress_Picture/output0.mp4

    


    Here is my Swift code to generate the command:

    


    var filterComplex = ""
var inputs = ""
var audioIndex = ""

if currentPhotoTextDataArray.contains(where: { $0.isLocation }) {
 // At least one element has isLocation set to true
 // Do something here
 print("There's at least one element with isLocation == true")
                            inputs = "-i \(inputPath) -i \(self.locImagePath)"
                            audioIndex = "2"
                            
                        } else {
                            // No elements have isLocation == true
                            print("No elements have isLocation set to true")
                            inputs = "-i \(inputPath)"
                            audioIndex = "1"
                        }
                        
                        for (index, textData) in currentPhotoTextDataArray.enumerated() {
                            print("x: \(textData.xPosition), y: \(textData.yPosition)")
                            let x = (textData.xPosition) * 1080 / self.photoViewWidth
                            let y = (textData.yPosition) * 1920 / self.photoViewHeight
                            
                            let fontSizeForWidth = (textData.fontSize * 1080) / self.photoViewWidth
                            let fontSizeForHeight = (textData.fontSize * 1920) / self.photoViewHeight
                            print("fontSizeForWidth: \(fontSizeForWidth)")
                            print("fontSizeForHeight: \(fontSizeForHeight)")
                            
                            let fontPath = textData.font.fontPath
                            let fontColor = textData.fontColor.toHexOrASS(format: "hex")
                            let backColor = textData.backColor?.toHexOrASS(format: "hex")
                            print("fontPath: \(fontPath)")
                            print("fontColor: \(fontColor)")
                            
                            let breakedText = self.addBreaks(in: textData.text, with: UIFont(name: textData.font.fontName, size: fontSizeForHeight) ?? UIFont(), forWidth: 1080, padding: Int(x))
                            
                            if textData.isLocation {
                                print("Location is there.")
                                
                                let textFont = UIFont(name: textData.font.fontName, size: fontSizeForHeight)
                                let attributes: [NSAttributedString.Key: Any] = [NSAttributedString.Key.font: textFont ?? UIFont()]
                                let size = (textData.text as NSString).size(withAttributes: attributes)
                                let textWidth = Int(size.width) + 130
                                
                                var endTimeLoc = 0.0
                                if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
                                    let duration = audioData.audioEndTime - audioData.audioStartTime
                                    endTimeLoc = duration
                                } else {
                                    endTimeLoc = 5
                                }
                                
                                let layerFilter = "color=color=black@.38:size=\(textWidth)x130[layer0];[video\(index)][layer0]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x):y=(\(y)-(overlay_h/2))[layer1];"
                                filterComplex += layerFilter
                                let imageFilter = "[1:v]scale=80:80[image];[layer1][image]overlay=enable='between(t,0,\(endTimeLoc))':x=\(x)+10:y=(\(y)-(overlay_h/2))[v\(index)];"
                                filterComplex += imageFilter
                                
                                if index == currentPhotoTextDataArray.count - 1 {
                                    let textFilter = "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))"
                                    filterComplex += textFilter
                                } else {
                                    let textFilter = "[v\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=(\(x)+100):y=(\(y)-(text_h/2))[video\(index + 1)];"
                                    filterComplex += textFilter
                                }
                                
                            } else {
                                
                                let textBack = textData.backColor != nil ? ":box=1:boxcolor=\(backColor ?? "")@0.8:boxborderw=25" : ""
                                
                                if index == currentPhotoTextDataArray.count - 1 {
                                    let textFilter = "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)"
                                    filterComplex += textFilter
                                } else {
                                    let textFilter = "[video\(index)]drawtext=text='\(breakedText)':fontfile=\(fontPath):fontsize=\(fontSizeForHeight):fontcolor=\(fontColor):x=\(x):y=\(y)\(textBack)[video\(index + 1)];"
                                    filterComplex += textFilter
                                }
                            }
                            
                        }
                        
                        if let audioData = self.audioDataArray.first(where: { $0.photoIndex == mainIndex }) {
                            
                            let audioSTime = self.getSTimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioStartTime))
                            let audioETime = self.getETimeAudio(index: mainIndex, secondsPhoto: Int(audioData.audioEndTime))
                            let duration = audioData.audioEndTime - audioData.audioStartTime
                            
                            command = "-loop 1 \(inputs) -ss \(audioSTime) -to \(audioETime) -i \"\(audioData.audioURL.path)\" -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)[final_video]\" -map \"[final_video]\":v -map \(audioIndex):a -c:v libx264 -t \(duration) -pix_fmt yuv420p -y \(outputURL.path)"
                            
                        } else {
                            command = "-loop 1 \(inputs) -filter_complex \"[0]scale=1080:trunc(ow/a/2)*2[video0];\(filterComplex)\" -c:v libx264 -t 5 -pix_fmt yuv420p -y \(outputURL.path)"
                        }
                    }


    


    The text in the generated video does not appear at the exact position where the user placed it. If anyone knows how to fix this, please help me.
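
    For illustration only (a sketch, in Python for brevity, and not from the original post): the command scales the photo to a fixed width of 1080 with the height derived from the aspect ratio (scale=1080:trunc(ow/a/2)*2), so multiplying y by a fixed 1920 can land in the wrong place. Assuming the preview shows the photo aspect-fit, the mapping would look roughly like this:

        # Hypothetical sketch: map a text position from the on-screen preview
        # into output-video pixels. Assumes the preview shows the photo
        # aspect-fit and the output uses scale=1080:trunc(ow/a/2)*2 as above.
        def map_point(x_view, y_view, view_w, view_h, img_w, img_h):
            out_w = 1080
            out_h = int(out_w * img_h / img_w / 2) * 2  # mirrors trunc(ow/a/2)*2
            # Size and offset of the aspect-fit photo inside the preview view.
            fit = min(view_w / img_w, view_h / img_h)
            fit_w, fit_h = img_w * fit, img_h * fit
            off_x, off_y = (view_w - fit_w) / 2, (view_h - fit_h) / 2
            # View coordinates -> photo coordinates -> output coordinates.
            x_out = (x_view - off_x) / fit_w * out_w
            y_out = (y_view - off_y) / fit_h * out_h
            return x_out, y_out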

    


  • Why doesn't Pytelegrambotapi seem to be able to send an audio file with a FRONT_COVER?

    27 December 2024, by exorik

    In general, the problem is that the audio file is sent without a picture. At first I thought the cover was being embedded with the wrong ID3 version, so I tried several ways of setting it:

    


      

      1. Via an ffmpeg command:

             command = [
                 "ffmpeg",
                 "-i", file_path,
                 "-i", cover_path,
                 "-map", "0",
                 "-map", "1",
                 "-c:a", "copy",
                 "-c:v", "mjpeg",
                 "-id3v2_version", "3",
                 "-y", output_file,
             ]
             subprocess.run(command, check=True)

      2. Via eyed3:

             audiofile.tag.images.set(
                 eyed3.id3.frames.ImageFrame.FRONT_COVER,
                 cover_data,
                 "image/jpeg",
             )

      3. Via the mutagen library:

             audio.add(
                 APIC(
                     encoding=3,
                     mime="image/jpeg",
                     type=3,
                     desc="Cover",
                     data=open(saved_photo, "rb").read(),
                 )
             )

    At this stage I realized that the problem is not the ID3 tag setting itself but the way the audio file is sent, because when I opened the file manually and sent it, the cover was there. I also tried writing the tag as ID3 v2.3 through the eyed3 library, but that didn't result in the audio file being sent to Telegram with the cover art either.


    


    (Screenshot: the audio file as sent, without the cover.)

    (Screenshot: the exact same audio file, only with the cover art; it hasn't been altered in any way.)

    Here is my code:

    


    


import os
import subprocess

import eyed3
import telebot
from mutagen.id3 import APIC, ID3

# `bot` (a telebot.TeleBot instance), `settings`, and `save_photo` are
# defined elsewhere in the script and are omitted here.

user_states = {}

def save_audio(message):
    file_info = bot.get_file(message.audio.file_id)
    downloaded_file = bot.download_file(file_info.file_path)

    user_dir = os.path.join("TEMP", str(message.chat.id), "albums")
    os.makedirs(user_dir, exist_ok=True)

    original_file_name = (
        message.audio.file_name
        if message.audio.file_name
        else f"{message.audio.file_id}.mp3"
    )

    file_path = os.path.join(user_dir, original_file_name)

    if message.chat.id not in user_states:
        user_states[message.chat.id] = {"files": [], "stage": None}

    with open(file_path, "wb") as f:
        f.write(downloaded_file)

    user_states[message.chat.id]["files"].append(file_path)

    return file_path


def clear_metadata_for_file(file_path, user_id):
    clear_metadata = settings.get(str(user_id), {}).get("clear", True)

    if clear_metadata:
        # Strip all metadata with ffmpeg, copying the audio stream as-is.
        temp_file_path = f"{file_path}.temp.mp3"
        command = [
            "ffmpeg",
            "-i", file_path,
            "-map_metadata", "-1",
            "-c:a", "copy",
            "-y", temp_file_path,
        ]
        subprocess.run(command, check=True)
        os.replace(temp_file_path, file_path)

        # Also clear any remaining ID3 tag with eyed3. Only save when a
        # tag actually exists, otherwise `audio_file.tag` is None.
        audio_file = eyed3.load(file_path)
        if audio_file.tag is not None:
            audio_file.tag.clear()
            audio_file.tag.save()

    return file_path


def send_files(message):
    chat_id = message.chat.id
    if chat_id not in user_states or "files" not in user_states[chat_id]:
        return "No files found"

    for file_path in user_states[chat_id]["files"]:
        try:
            with open(file_path, "rb") as f:
                bot.send_audio(chat_id, f)
        except Exception as e:
            return e



@bot.message_handler(content_types=["photo"])
def handle_cover(message):
    if message.content_type == "photo":
        try:
            saved_photo = save_photo(message)

            for file_path in user_states[message.chat.id]["files"]:
                # Embed the cover with mutagen as an APIC frame.
                audio = ID3(file_path)
                audio.add(
                    APIC(
                        encoding=3,
                        mime="image/jpeg",
                        type=3,
                        desc="Cover",
                        data=open(saved_photo, "rb").read(),
                    )
                )
                audio.save(file_path)

                # Embed the same cover again with eyed3, forcing ID3v2.3.
                with open(saved_photo, "rb") as image_file:
                    image_data = image_file.read()

                audiofile = eyed3.load(file_path)
                if audiofile.tag is None:
                    audiofile.initTag(version=(2, 3, 0))
                audiofile.tag.images.set(
                    3, image_data, "image/jpeg", description="Cover"
                )
                audiofile.tag.save(version=(2, 3, 0))

            send_files(message)

        except Exception:
            # Errors are silently swallowed here, which hides failures.
            return

bot.infinity_polling()



    


    I was hoping to send the audio files with their cover art, and I'm sure the art is embedded correctly. But for some reason the file itself is sent without the cover showing. I'd really appreciate it if you could help me out with this.
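
    As an illustration (my assumption, not something from the original post): pyTelegramBotAPI's send_audio also accepts the cover explicitly, so the client doesn't have to rely on the embedded ID3 art; in recent releases the parameter is named thumbnail (older ones call it thumb):

        # Hypothetical sketch: pass the cover image explicitly as the audio
        # thumbnail when sending. `file_path`, `saved_photo`, and `chat_id`
        # are the same variables as in the handler above.
        with open(file_path, "rb") as f, open(saved_photo, "rb") as cover:
            bot.send_audio(chat_id, f, thumbnail=cover)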