A spot-the-difference assistant script for QQ Game

Preface

My graduation project has been eating into my note-taking, so in my spare time I unwind with spot-the-difference puzzles. Sometimes, though, out of five differences I can only find two or three, which is frustrating. So I built an assistant for QQ Game Hall's "Find the Difference" game. This post walks through the notes, from the individual functions to packaging.

Approach

Interface -> screen capture -> compare images to find the differences -> automatic mouse click

1. Interface

Built with PyQt5; the key point of the interface design: keep the window small.

class window(QMainWindow):
    def __init__(self):
        super().__init__()
        self.resize(300, 50)
        self.move(100, 100)
        self.setWindowTitle('Spot-the-difference assistant')
        # text box for status messages
        self.text = QTextEdit(self)
        self.text.resize(150, 25)
        self.text.setText('process info')
        # start button
        self.button = QPushButton('Start', self)
        self.button.clicked.connect(self.grabphoto)
        self.button.resize(150, 25)
        self.button.move(150, 0)
        # stop button
        self.button1 = QPushButton('Stop', self)
        self.button1.clicked.connect(self.stop)
        self.button1.resize(150, 25)
        self.button1.move(150, 25)
        self.button.setShortcut('Ctrl+C')

        # mode selector
        self.comboBox_1 = QComboBox(self)
        self.comboBox_1.resize(150, 25)
        self.comboBox_1.move(0, 25)
        self.comboBox_1.addItem("--Please choose--")
        self.comboBox_1.addItem("Hint only")
        self.comboBox_1.addItem("Auto-click mode")
        self.comboBox_1.currentText()  # currentText()/currentIndex() can be read any time


The combo box's currentText and currentIndex can be read directly whenever needed.
Note that the keyboard shortcut only fires while the window has focus.

2. Screen capture

Although a handle-based screenshot method could capture the window contents in the background, the later recognition step sometimes fails and needs manual intervention, so running fully in the background is not an option.
ImageGrab has a long interval between captures and is not recommended.
Instead, use fast screen capture with pyautogui.screenshot, which returns a PIL image.

        img1 = pyautogui.screenshot(region=[541, 468, 380, 285])
        img2 = pyautogui.screenshot(region=[x, y, width, height])

Note: the captured image must be converted to a NumPy array before OpenCV can process it.
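A minimal sketch of that conversion. Since pyautogui.screenshot needs a live display, a synthetic PIL image stands in for the screenshot here; the conversion step is identical.

```python
import numpy as np
from PIL import Image

# pyautogui.screenshot() returns a PIL Image; here a synthetic RGB image
# stands in for it so the sketch runs without a display.
img = Image.new('RGB', (380, 285), color=(120, 60, 30))

arr = np.array(img)   # convert for OpenCV-style processing
print(arr.shape)      # (height, width, channels) -> (285, 380, 3)
print(arr.dtype)      # uint8
```

One thing to watch: PIL stores channels as RGB while OpenCV conventionally expects BGR, so pass the array through cv2.cvtColor(arr, cv2.COLOR_RGB2BGR) if the channel order matters for a later step.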

3. Image comparison

This is the key step. You might assume that simply subtracting the two images pixel by pixel would do the trick?
In practice the two images served by the game differ slightly in brightness and tone, so the subtraction result was full of strange colors.
Then I found the trick on a blog: invert one image and blend it with the other, and the differences stand out very clearly.

img3 = ImageChops.invert(img2)
img4 = Image.blend(img1, img3, 0.5)
img5 = np.array(img4)
(adapted from code found online)
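Why the invert-and-blend trick works: wherever the two images agree, a pixel value v averages with its inverse 255 - v to roughly 127.5, so matching regions collapse to flat mid-gray, while differing pixels land far from gray. A tiny sketch with fake 4x4 "screenshots":

```python
import numpy as np
from PIL import Image, ImageChops

# Two tiny grayscale "screenshots": identical except for one pixel.
a = np.full((4, 4), 200, dtype=np.uint8)
b = a.copy()
b[1, 1] = 50   # the one difference

img1 = Image.fromarray(a)
img2 = Image.fromarray(b)

# Invert one image, then blend 50/50: identical pixels v average with
# 255 - v to ~127.5 (flat gray); the differing pixel stands out.
blended = np.array(Image.blend(img1, ImageChops.invert(img2), 0.5))

print(blended[0, 0])   # ~127-128: matching pixel collapses to gray
print(blended[1, 1])   # far from gray: (200 + (255 - 50)) / 2 ~ 202
```

This also explains why slight brightness differences no longer matter: they only shift the result a little off mid-gray, while a real difference produces a large deviation that Canny can pick up.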


That blended image alone already provides the assistant's core hint function; the code and logic are almost unbelievably simple. But to be complete it needs the automatic click feature on top.
Pipeline: Canny edge detection -> morphological closing -> findContours -> does the contour count meet the requirement? If not, shrink the closing kernel and repeat, until the requirement is met or the retry limit is exceeded.

img6 = cv2.Canny(img5, 90, 150)
kernel = np.ones((20, 20), np.uint8)
closing = cv2.morphologyEx(img6, cv2.MORPH_CLOSE, kernel)
self.h = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
a = 18
while len(self.h[0]) > 10 and a > 1:
    kernel = np.ones((a, a), np.uint8)
    closing = cv2.morphologyEx(img6, cv2.MORPH_CLOSE, kernel)
    self.h = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    self.text.setText(f'Processing pass {18 - a}')
    a -= 1

counters = self.h[0]
# ideally five contours, one per difference
xall = yall = 0
if len(counters) < 15:
    for i1 in range(len(counters)):
        for i2 in range(len(counters[i1])):
            xall += counters[i1][i2][0][0]
            yall += counters[i1][i2][0][1]
        a = xall / len(counters[i1]) + 541   # map back to screen x
        b = yall / len(counters[i1]) + 468   # map back to screen y
        if self.comboBox_1.currentIndex() == 2:
            pyautogui.click(a, b, interval=0.5, clicks=1, button='left')
        xall = yall = 0

cv2.imshow('binary', closing)
cv2.imshow('color', img5)
cv2.waitKey(1)  # let the imshow windows refresh inside the Qt event loop

Note that cv2.findContours returns three values in OpenCV 3 (image, contours, hierarchy) but only two in OpenCV 4 (contours, hierarchy); the contour list is the part we need, so index the result according to your OpenCV version.
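Because the return signature differs across versions, a common compatibility trick is to take the second-to-last element of whatever tuple comes back. A small helper sketch (here called grab_contours, after the identically-named imutils utility; the fake tuples below stand in for real cv2.findContours results so the sketch runs without OpenCV):

```python
def grab_contours(res):
    """Return the contour list from a cv2.findContours result regardless
    of OpenCV version: it is the second-to-last tuple element
    (OpenCV 3 returns (image, contours, hierarchy); OpenCV 2 and 4
    return (contours, hierarchy))."""
    if len(res) not in (2, 3):
        raise ValueError('unexpected cv2.findContours return value')
    return res[-2]

# usage (shapes faked so the sketch runs without OpenCV installed):
c4 = grab_contours((['c1', 'c2'], 'hierarchy'))            # OpenCV 4 style
c3 = grab_contours(('image', ['c1', 'c2'], 'hierarchy'))   # OpenCV 3 style
print(c4 == c3 == ['c1', 'c2'])
```

With a helper like this, the `self.h[0]` indexing in the script above works unchanged on either version.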

counters = h[0]        # the contour list (index depends on OpenCV version, see above)
 counters[0]           # point set of the first contour
 counters[0][0][0]     # the [x, y] of the first point of the first contour

Since the differences are extracted as contours, the point for the mouse to click is taken as the mean (centroid) of each contour's point set, which falls inside the difference region.
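The centroid computation can be sketched in a few lines of NumPy. The contour below is a made-up example, shaped (N, 1, 2) exactly as cv2.findContours returns its point sets; the offsets 541 and 468 are the capture region's top-left corner from this script.

```python
import numpy as np

# A fake contour in the region's local coordinates, shaped (N, 1, 2)
# as cv2.findContours returns its point sets.
contour = np.array([[[10, 20]], [[30, 20]], [[30, 40]], [[10, 40]]])

# Centroid of the point set = mean over the points axis.
cx, cy = contour[:, 0, :].mean(axis=0)

# Map back to absolute screen coordinates by adding the capture
# region's top-left corner (541, 468 in this script).
screen_x, screen_y = cx + 541, cy + 468
print(screen_x, screen_y)   # 561.0 498.0
```

This is the same arithmetic as the xall/yall accumulation loop above, just vectorized.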
Simulated mouse click

pyautogui.click(a, b, interval=0.5, clicks=1, button='left')

One more caution: if too many contours are detected (meaning preprocessing went wrong), the script could fire clicks all over the screen and you lose control of the mouse, so the contour count must be capped.
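A minimal sketch of such a guard, with a hypothetical helper (safe_click_points is my name, not part of the original script). As a second safety net, pyautogui's FAILSAFE feature is on by default: slamming the mouse into a screen corner raises pyautogui.FailSafeException and aborts the automation.

```python
MAX_CONTOURS = 15   # the cap used in this script; clicking is skipped above it

def safe_click_points(contour_centroids, max_contours=MAX_CONTOURS):
    """Return the points to click, or an empty list when preprocessing
    clearly failed (too many contours detected). Keeping
    pyautogui.FAILSAFE at its default True also lets you abort by
    moving the mouse into a screen corner."""
    if len(contour_centroids) >= max_contours:
        return []   # refuse to click; fall back to the hint image
    return contour_centroids

print(safe_click_points([(561, 498), (700, 500)]))   # few contours: clicks proceed
print(safe_click_points([(0, 0)] * 40))              # [] -> no clicks fired
```

The actual clicking loop would then iterate over the returned list and call pyautogui.click for each point, exactly as in the script above.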

Summary

When two differences sit very close together, the closing operation may merge them, so fewer than five points are detected; in that case the round has to be finished by eye, with the blended hint image as an aid.

Packaging with pyinstaller

pip install pyinstaller

Then run on the command line:

pyinstaller --version

Verify that the installation was successful.
The mistake I made: I had removed the Python 3.6.6 interpreter only inside PyCharm, not from the PATH environment variable, so every time I packaged, the dependency lookup resolved against the wrong interpreter path.

My Computer -> Properties -> Advanced system settings (left pane) -> Advanced -> Environment Variables -> Path -> move entries up or down to change priority -> add the interpreter and its library paths.

Packaging steps (the top Baidu result explains pyinstaller's parameters in detail):
open cmd -> cd to the directory containing main.py -> run pyinstaller -F -w main.py -> the generated exe in dist needs the same working environment as main.py (the same relative paths for images and music), so just move it up one directory level.

Source code

from PyQt5.Qt import *
import cv2
import pyautogui
import sys
import numpy as np
from PIL import ImageChops,Image


class window(QMainWindow):
    def __init__(self):
        super().__init__()
        self.resize(300, 50)
        self.move(100, 100)
        self.setWindowTitle('Spot-the-difference assistant')
        # text box for status messages
        self.text = QTextEdit(self)
        self.text.resize(150, 25)
        self.text.setText('process info')
        # start button
        self.button = QPushButton('Start', self)
        self.button.clicked.connect(self.grabphoto)
        self.button.resize(150, 25)
        self.button.move(150, 0)
        # stop button
        self.button1 = QPushButton('Stop', self)
        self.button1.clicked.connect(self.stop)
        self.button1.resize(150, 25)
        self.button1.move(150, 25)
        self.button.setShortcut('Ctrl+C')

        # mode selector
        self.comboBox_1 = QComboBox(self)
        self.comboBox_1.resize(150, 25)
        self.comboBox_1.move(0, 25)
        self.comboBox_1.addItem("--Please choose--")
        self.comboBox_1.addItem("Hint only")
        self.comboBox_1.addItem("Auto-click mode")
        self.comboBox_1.currentText()  # currentText()/currentIndex() can be read any time

# LU(541,468)RU(921,468) LD(541,753)RD(921,753) LU(998,468) RU(1378,468) (998,753) (1378,753)
#         img1 = pyautogui.screenshot(region=[541,468,380,285])
#         img2 = pyautogui.screenshot(region=[998, 468, 380, 285])
#         img3 =ImageChops.invert(img2)
#         Image.blend(img1,img3,0.5).show()
    def grabphoto(self):
        self.text.setText('Processing...')
        # img1 = pyautogui.screenshot(region=[541,468,380,285])
        # img2 = pyautogui.screenshot(region=[998,468,380,285])
        img1 = Image.open('1.png')   # saved test screenshots
        img2 = Image.open('2.png')


        img3 =ImageChops.invert(img2)
        img4 = Image.blend(img1,img3,0.5)
        img5 = np.array(img4)



        # Dilation/erosion approach (abandoned, kept for reference)
        '''
        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)  # sharpening kernel
        dst = cv2.filter2D(img5, -1, kernel=kernel)
        dst = cv2.filter2D(dst, -1, kernel=kernel)
        cv2.imshow('1',dst)
        img3 =ImageChops.invert(img2)
        img4 = Image.blend(img1,img3,0.5)
        img5 = np.array(img4)
        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)  # sharpening kernel
        dst = cv2.filter2D(img5, -1, kernel=kernel)
        dst = cv2.filter2D(dst, -1, kernel=kernel)
        cv2.imshow('1',dst)

        gray = cv2.cvtColor(dst, cv2.COLOR_RGB2GRAY)  # convert the input image to grayscale
        cv2.imshow("binary2", gray)
        gray = np.array(gray)
        newgray = np.array(gray)
        allgray = gray.sum()/(380*285)
        for i in range(380):
            for j in range(285):
                if gray[j][i] < allgray+50:
                    newgray[j][i] = 0
                else:
                    newgray[j][i] = 255
        cv2.imshow("contours", newgray)
        kernel = np.ones((2,2),np.uint8)
        closing = cv2.morphologyEx(newgray,cv2.MORPH_CLOSE,kernel)

        kernel1 = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
        opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel1)

        kernel = np.ones((15,15),np.uint8)
        closing = cv2.morphologyEx(opening,cv2.MORPH_CLOSE,kernel)


        h = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        contours = h[0]
        i = 5
        while len(contours) > 5 and i >0:
            kernel1 = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
            opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel1)
            print(i)
            i-=1

        xall = yall = 0
        for i1 in range(len(contours)):
            for i2 in range(len(contours[i1])):
                xall += contours[i1][i2][0][0]
                yall += contours[i1][i2][0][1]
            a = xall/len(contours[i1]) + 541
            b = yall/len(contours[i1]) + 468
            if self.comboBox_1.currentIndex() == 2:
                pyautogui.click(a,b,interval=0.5,clicks=1,button='left')
            xall = yall = 0
        '''

        img6 = cv2.Canny(img5, 90, 150)
        kernel = np.ones((20, 20), np.uint8)
        closing = cv2.morphologyEx(img6, cv2.MORPH_CLOSE, kernel)
        self.h = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        a = 18
        while len(self.h[0]) > 10 and a > 1:
            kernel = np.ones((a, a), np.uint8)
            closing = cv2.morphologyEx(img6, cv2.MORPH_CLOSE, kernel)
            self.h = cv2.findContours(closing, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
            self.text.setText(f'Processing pass {18 - a}')
            a -= 1

        counters = self.h[0]
        # ideally five contours, one per difference
        xall = yall = 0
        if len(counters) < 15:
            for i1 in range(len(counters)):
                for i2 in range(len(counters[i1])):
                    xall += counters[i1][i2][0][0]
                    yall += counters[i1][i2][0][1]
                a = xall / len(counters[i1]) + 541   # map back to screen x
                b = yall / len(counters[i1]) + 468   # map back to screen y
                if self.comboBox_1.currentIndex() == 2:
                    pyautogui.click(a, b, interval=0.5, clicks=1, button='left')
                xall = yall = 0

        cv2.imshow('binary', closing)
        cv2.imshow('color', img5)
        cv2.waitKey(1)  # let the imshow windows refresh inside the Qt event loop



    def stop(self):
        pass    # not implemented yet




if __name__ == '__main__':
    app = QApplication(sys.argv)
    a = window()
    a.show()
    sys.exit(app.exec_())


Posted on Wed, 12 Feb 2020 04:42:48 -0800 by articlesocial