Python lets you create short videos efficiently

1. Extracting background music and modifying volume

Download the video file from Douyin first, then use the moviepy library to extract the background music.

from moviepy.editor import VideoFileClip

def get_audio_from_video(video_path):
    """
    Extract the audio track from a video.
    :param video_path: path of the source video
    :return: path of the extracted .wav file
    """
    file_path = './source/' + gene_random() + '.wav'
    video = VideoFileClip(video_path)
    audio = video.audio
    audio.write_audiofile(file_path)
    return file_path
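The `gene_random()` helper is not shown in the article; it presumably generates a unique file name. A minimal sketch using the standard library (the name and behavior here are assumptions):

```python
import uuid

def gene_random():
    """Return a random 32-character hex string for use as a unique file name."""
    return uuid.uuid4().hex
```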

A short video may contain more than one piece of background music, and the volume of each often needs to be adjusted.

from moviepy.editor import AudioFileClip, afx

def handle_bgm(bgm_path, coefficient):
    music = AudioFileClip(bgm_path)

    # Scale the volume by `coefficient`, with a short fade-in and fade-out
    out_music = music.fx(afx.volumex, coefficient).fx(afx.audio_fadein, 0.5).fx(
        afx.audio_fadeout, 1)

    # Save the processed background music
    out_music.write_audiofile('./source/output.wav')
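The `coefficient` passed to `afx.volumex` is a linear gain multiplier. If you prefer to think in decibels, the conversion is `10 ** (db / 20)`; a small helper for that (my own addition, not from the article):

```python
def db_to_gain(db):
    """Convert a gain change in decibels to the linear multiplier volumex expects."""
    return 10 ** (db / 20)

# db_to_gain(0) -> 1.0 (unchanged), db_to_gain(20) -> 10.0, db_to_gain(-20) -> 0.1
```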

2. Changing the Voice

Voice changing is another practical skill in short-video creation; it can be achieved in three ways:

  • Use Adobe Audition (AU) to process the audio manually

  • Call a cloud speech API, such as Baidu Cloud's

  • Use the librosa library

import librosa
import soundfile as sf

source = "./source/source.wav"

# Load the background music
y, sr = librosa.load(source)

# Shift the pitch up by six semitones (the return value must be captured)
y = librosa.effects.pitch_shift(y, sr=sr, n_steps=6)
# y = shrinkstep(10, y, sr)  # custom helper used by the author, not shown in the article

# Write the new background music file
# (librosa.output.write_wav was removed in librosa 0.8; soundfile replaces it)
outputpath = "./source/result.wav"
sf.write(outputpath, y, sr)
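`n_steps=6` shifts the pitch up by six semitones; each semitone multiplies the frequency by 2**(1/12), so six steps raise it by a factor of about 1.414. A quick check of that arithmetic:

```python
def semitone_ratio(n_steps):
    """Frequency ratio produced by shifting n_steps semitones (12 per octave)."""
    return 2 ** (n_steps / 12)

# semitone_ratio(12) -> 2.0 (one octave up); semitone_ratio(6) is roughly 1.414
```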

3. Video Transition

Adding transitions between videos makes playback smoother. The following seven steps implement a video transition in Python.

  • Separate the video and audio tracks of the two videos

  • Unify the video resolution

  • Add transition effects, such as a fade-out at the end of the first video and a fade-in at the start of the second

  • Merge the two video clips

  • Merge the two audio clips

  • Attach the merged audio to the merged video

  • Save the video file

def transitions_animation(path_video1, path_video2):
    """
    Transition between two videos (fade-out / fade-in as the example).
    Note: make sure both videos use a consistent frame rate.
    :param path_video1: path of the first video
    :param path_video2: path of the second video
    :return:
    """
    # Get the duration of each video
    clip_video1 = VideoFileClip(path_video1)
    duration_video1 = clip_video1.duration

    # Extract each video's audio track
    path_audio1 = get_audio_from_video(path_video1)
    path_audio2 = get_audio_from_video(path_video2)

    audio_video1 = AudioFileClip(path_audio1)
    audio_video2 = AudioFileClip(path_audio2)

    clip_video2 = VideoFileClip(path_video2)
    duration_video2 = clip_video2.duration

    print(f'The durations of the two videos are: {duration_video1}, {duration_video2}')

    # Unify the video resolution
    w, h, fps = clip_video1.w, clip_video1.h, clip_video1.fps
    clip_video2_new = clip_video2.resize((w, h))

    # Transition duration, 2 s by default
    transitions_time = 2

    # Fade-out at the end of the first video
    subVideo1_part1 = clip_video1.subclip(0, duration_video1 - transitions_time)
    subVideo1_part2 = clip_video1.subclip(duration_video1 - transitions_time).fadeout(transitions_time, (1, 1, 1))

    # Fade-in at the start of the second video
    subVideo2_part1 = clip_video2_new.subclip(0, 3).fadein(3, (1, 1, 1))
    subVideo2_part2 = clip_video2_new.subclip(3)

    # Concatenate the four sub-clips
    result_video = concatenate_videoclips([subVideo1_part1, subVideo1_part2, subVideo2_part1, subVideo2_part2])

    # Concatenate the audio
    result_audio = concatenate_audioclips([audio_video1, audio_video2])

    # Attach the audio to the video and write the result
    final_clip = result_video.set_audio(result_audio)
    final_clip.write_videofile('./source/transitions.mp4', fps=fps)  # output path chosen as an example

4. Speed Ramping

To make short videos look more polished, many editors apply speed ramping, which makes otherwise flat footage feel much more dynamic.

The usual pattern is to play a passage in slow motion, then return to normal speed, often with a transitional sound effect in between.

def change_video_speed(video_path, speed, start, end):
    """
    Change the playback speed of part of a video.
    [MoviePy clip API reference](https://juejin.im/post/5d1c4318f265da1ba9159912)
    :param video_path: video path
    :param speed: speed factor (< 1 slows down, > 1 speeds up)
    :param start: start time of the re-timed segment, in seconds
    :param end: end time of the re-timed segment, in seconds
    :return:
    """
    video = VideoFileClip(video_path)

    # Re-time the [start, end] segment
    # (vfx.speedx also rescales the clip's duration, unlike a bare fl_time)
    part1 = video.subclip(start, end).fx(vfx.speedx, speed)

    # The rest of the video keeps its normal speed
    part2 = video.subclip(end)

    # Concatenate the two parts
    result_video = concatenate_videoclips([part1, part2])

    result_path = './source/result.mp4'
    result_video.write_videofile(result_path)
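Playing the first segment at `speed` compresses its `end - start` seconds to `(end - start) / speed`, while the remainder keeps its length, so the output duration is predictable. A sketch of that calculation (a hypothetical helper, not part of the article's code, assuming the segment starts at 0):

```python
def ramped_duration(total, end, speed):
    """Duration after playing the first `end` seconds at `speed` and the rest normally."""
    return end / speed + (total - end)

# Slowing the first 10 s of a 60 s video to half speed stretches it to 70 s total
```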

5. Guichu Remix Videos

Guichu (鬼畜) remix videos originated on Bilibili. Many comedy clips on Douyin use guichu-style editing, including frame repetition, mirroring the picture along an axis, and so on.

Making a guichu-style video in Python is also straightforward.

def ghost_video(video_path, repeat_time, location):
    """
    Generate a guichu-style (stutter-repeat) video.
    :param video_path: video path
    :param repeat_time: number of repetitions
    :param location: position (in seconds) of the segment to repeat
    :return:
    """
    video = VideoFileClip(video_path)

    # The part of the video before the repeated segment
    part1 = video.subclip(0, location)

    # The one-second target segment to repeat
    target = video.subclip(location, location + 1)

    # The part of the video after the repeats
    part2 = video.subclip(location + repeat_time)

    targets = []

    # Join the three sections
    targets.append(part1)

    for _ in range(repeat_time):
        targets.append(target)

    targets.append(part2)

    # Concatenate and write the result
    result_video = concatenate_videoclips(targets)
    result_video.write_videofile('./source/ghost.mp4')  # output path chosen as an example
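Because the repeated segment is one second long and `part2` resumes `repeat_time` seconds after `location`, the repeats exactly replace the skipped footage and the output keeps the source's total length. A quick sanity check of that bookkeeping (hypothetical helper):

```python
def ghost_duration(total, repeat_time, location):
    """Output length: front part + repeat_time one-second repeats + tail."""
    front = location
    repeats = repeat_time * 1.0
    tail = total - (location + repeat_time)
    return front + repeats + tail

# For a 30 s video the output is still 30 s, whatever repeat_time is
```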

6. Subtitle Watermark

With ImageClip and TextClip, it is easy to add an image watermark and subtitles to a video.

# Add a picture watermark
# Controls: watermark duration, position, opacity, etc.
logo = (ImageClip("./source/logo.png")
            .set_duration(video.duration)
            .resize(height=50)
            .margin(right=8, top=8, opacity=1)
            .set_pos(("right", "top")))

final = CompositeVideoClip([video, logo])

# Add subtitles
clip = VideoFileClip(video_path)

# Subtitle: text content, position, display duration
text_clip = TextClip(subtitle, fontsize=30, color='white').set_pos('bottom').set_duration(duration)

video = CompositeVideoClip([clip, text_clip.set_start(start)])

7. Transcoding

Note that animated videos produced with AE (Adobe After Effects) tend to be very large; the ffmpeg command-line tool can transcode and compress them quickly.

Combined with the shell alias trick mentioned in the previous article, video clipping becomes even more convenient.

#Fast Transcoding Compression
alias zh='ffmpeg -i source.mov -qscale 0 output.mp4'
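The same transcode can be driven from Python via `subprocess`; a sketch that builds the argument list mirroring the alias (file names are placeholders):

```python
import subprocess

def build_transcode_cmd(src, dst):
    """ffmpeg arguments matching the alias: keep quality (-qscale 0) while converting to mp4."""
    return ["ffmpeg", "-i", src, "-qscale", "0", dst]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(build_transcode_cmd("source.mov", "output.mp4"), check=True)
```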

Tags: Programming Python Lambda

Posted on Wed, 08 Jan 2020 20:07:22 -0800 by fluvly