
laiye rpa developer guide

data is an inevitable product of the development of information technology, and collecting, collating, processing, and analyzing data is an indispensable part of any rpa process. this chapter follows the data processing sequence as its main line and introduces data acquisition, data reading, data processing, and data storage in turn. it covers different data formats such as web page data, application data, file data, json, and strings, as well as various data processing methods such as expressions, collections, and arrays.

web page data, because of its diversity and real-time nature, is the most common source of data acquisition. scraping data from web pages is often colloquially called "crawling data". laiye rpa integrates a "data capture" function that can scrape web page data in real time, and because data scraping is used so frequently, the "data capture" button is placed directly on the toolbar.

the "data capture" feature is another commonly used function. this button displays an interactive dialog box. this dialog box will guide the user to complete the web page data capture. laiye rpa currently supports data scraping for four programs: desktop program table, java table, sap table, and web page. here, we will use web page data scraping as an example.


figure 9: start fetching data-select target

click the "select target" button. the "select target" button is the same as the "select target" button in other target commands we learned earlier. it should be noted that laiye rpa will not automatically open the web pages and pages you want to crawl, so before data crawling, you need to open the data web page or desktop program table in advance. this work can be done manually or through other laiye rpa command combinations. for example, we will walk through how to capture mobile phone product information from a website. we use the "start new browser" command of "browser automation" to open the website on the browser, and use the "set element text" command in the search bar. then, we enter "mobile phone" and use the "click target" command to click the "search" button.

after the web page is ready, we need to locate the data in the web page. first, we capture the product name: carefully select the target of the product name (highlighted with a red frame and a blue mask underneath).


figure 10: select the product name

here, laiye rpa will display a message box: "please select the same level of data and capture it again". this is required because what we want to capture is batch data, so laiye rpa must find the features that all the items in the batch have in common. after selecting the first target, we get one set of features, but we still do not know which of them are shared by all targets and which belong only to the first target. selecting and capturing another piece of data at the same level helps laiye rpa determine what all the targets have in common.


figure 11: tip: select the same level of data and re-capture


once again, we need to locate the data to be captured on the web page, that is, the product name. since the first capture selected the name of the first product, we now capture the name of the second product. we must select the target of the product name carefully to ensure that the second capture is at the same level as the first, because a web page hierarchy can be deep and the same text may correspond to targets at several levels. laiye rpa will help you check this: if it reports an error, your selection is most likely wrong. you could also choose the third or fourth product name instead, which would not affect the capture results.


figure 12: select the product name again

after both targets have been selected, laiye rpa will display a message box again, asking "would you like to capture the text or the text link?", which can be selected as needed.


figure 13: the captured data type.

after clicking the "ok" button, laiye rpa will display a preview of the data capture results. you can check whether the data capture results are consistent with your expectations. if they are not consistent, you can click the "previous" button to restart the data capture. if they are consistent, and you only want to capture the "product name" data, then click the "next" button. if you want to capture more data fields; for example, if you want to capture the product price, you can click "fetch more data" button. laiye rpa will pop up again to select the target interface.


figure 14: preview crawl results

this time we select the text label of the product price.


figure 15: select the product price

similarly, after selecting the target twice and previewing the data crawl results again, you can see that the product name and product price have been successfully crawled.


figure 16: preview the crawl results again

we can repeat this method to add more data items to capture, such as the image address of the product or the number of reviews. if you do not need to capture more data items, click the "next" button. the guidance page that appears will ask: "would you like to capture more data with the "next page" button?"

if the web page data is regarded as a two-dimensional data table, the steps detailed above add columns to the table, such as product name and price, while capturing the page navigation button adds rows. if you only want to capture the first page of data, click the "finish" button; if you need to capture the following pages as well, click the "capture navigation button" button.


figure 17: crawl and turn pages

if you click the "capture navigation button", a "target selection" guide box will pop up. select navigation button on the web page, where the page navigation button is the ">" symbol button on the page.


figure 18: selecting the page turning button target

when all the steps are completed, you can see that laiye rpa has inserted a "data capture" command into the command assembly area, and all the attributes of the command have been filled in through the target selection wizard. for example, the content of the "target" attribute is:

{
  "html": {
    "attrmap": {
      "id": "j_goodslist",
      "tag": "div"
    },
    "index": 0,
    "tagname": "div"
  },
  "wnd": [{
    "app": "chrome",
    "cls": "chrome_widgetwin_1",
    "title": "*"
  }, {
    "cls": "chrome_renderwidgethosthwnd",
    "title": "chrome legacy window"
  }]
}

certain attributes of the "data capture" command can be further modified: the "number of captured pages" attribute refers to how many pages of data are fetched; the "number of returned results" attribute limits how many results are returned per page (-1 means there is no limit to the number); the "page turning interval (ms)" attribute refers to how many milliseconds to turn the page (sometimes the network speed is slow, and it takes a longer interval to open the page completely).

in addition to web data scraping, files are another very important data source. laiye rpa provides commands for handling files in several formats, including general files, ini files, and csv files. let us first look at general files.

in the "general file" directory of "file processing" in the command center, select and insert a "read file" command. the command has three attributes. one is the "file path" attribute, which is to fill in the path of the file to be read. here it is filled with @res"test.txt", which means the test.txt file under the res subdirectory of the process directory. the next is the "character set encoding" attribute. select "gbk encoding (ansi)" if most of the files contain chinese characters; selecting "utf-8 encoding" or "unicode encoding" would make the chinese garbled. the last is "output to" attribute: fill in a string variable sret, and the contents of the read file will be saved in this variable in the form of a string.


figure 19: reading the file
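in source view, this visual command corresponds to a single function call. the sketch below is only an illustration: the helper name file.read and its (path, encoding) parameter order are assumptions based on the attributes described above, so check the source view of your own process for the exact form.

// minimal sketch: read a text file into a string and print it
// file.read and its parameter order are assumed, not confirmed by this guide
sret = file.read(@res"test.txt", "gbk")
traceprint(sret)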

general files can only be read or written as a whole. if you need finer-grained operations, you can select specific file operation commands according to the file type, such as the ini or csv commands.

the ini file is also called the initialization configuration file. many windows programs use this format to manage their configuration information. the ini format is relatively fixed: a file generally consists of multiple sections, and each section consists of configuration items, which are key-value pairs.

let’s look at the most classic ini file operation: "read key value". in the "ini format" directory of "file processing" in the command center, select and insert a "read key value" command. this command reads the value of the specified key under the specified section in the specified ini file. the command has five attributes. for the "configuration file" attribute, fill in the path of the ini file to be read; here it is filled with @res"test.ini", indicating the test.ini file in the res subdirectory of the process directory, whose content is as follows:

[meta]
name = mlib
description = math library
version = 1.0
[default]
libs=defaultlibs
cflags=defaultcflags
[user]
libs=userlibs
cflags=usercflags


the "section name" attribute holds the section in which to search for the key-value pair; here "user" is filled in, indicating that we want to search in the [user] section. the "key name" attribute holds the name of the key to find; here it is "libs", indicating that we want the content after "libs=". the "default value" attribute is the value returned when the key cannot be found. the "output to" attribute holds a string variable sret, which will save the key value that is found.


figure 20: reading the ini file

add a command to "output debugging information" and print out sret. after running the process, you can see that the value of sret is "userlibs".

a csv file stores table data in plain text: each line of the file is a data record, and each record consists of one or more fields separated by commas. csv is widely used to exchange tabular data between applications of different architectures, solving the interoperability problem caused by incompatible data formats.

in laiye rpa, you can use the "open csv file" command to read the contents of the csv file into a data table to better process the data. for the processing method of the data table, see the next section.

first look at the "open csv file" command, this command has two attributes. the "file path" attribute fills in the path of the csv file to be read. here it is filled with @res"test.csv", indicating that the process has been read and that the test.csv file is in the res subdirectory of the directory. fill in a data table object objdatatable in the "output to" attribute. after running the command, the content of the test.csv file will be read into the data table object objdatatable. we can add an "output debug information" command to view the contents of the objdatatable object.


figure 21: open csv file

let’s look at the "save csv file" command again. this command also has two attributes. the "data table object" attribute fills in the data table object objdatatable obtained in the previous step, and the "file path" attribute fills in the path to save the csv file. here it contains @ res"test2.csv", indicating that the data in the objdatatable data table object will be saved to the test2.csv file in the res subdirectory of the process directory.

after the data is read, it must be processed. for different data formats, laiye rpa provides different data processing methods and commands, covering data tables, strings, collections, arrays, time, json, and regular expressions. the following sections describe these data processing methods.

the data table is a two-dimensional table that uses memory to store and process data. compared with files stored on the hard disk, the advantage of memory is that data processing is tens or hundreds of times faster, but memory space is relatively limited. therefore, the general processing flow is: 1. read the data to be processed into memory and store it as a data table; 2. process the data table in memory; 3. after processing is completed, write the data back to the hard disk; 4. process the next batch of data. in this way, data processing can be greatly accelerated without being limited by the available memory.

first, let's look at how to build a data table. in the "data table" directory of "file processing" in the command center, select and insert a "build data table" command. this command generates a data table from a header and construction data. the command has three attributes. the "table column header" attribute holds the header of the data table; here it contains ["name", "subject", "score"]. the "construct data" attribute holds the data of the table; here it is filled in with [["zhang san", "chinese", "78"], ["zhang san", "english", "81"], ["zhang san", "math", "75"], ["li si", "chinese", "88"], ["li si", "english", "84"], ["li si", "math", "65"]].


figure 22: building the data table

in this way, the data table is constructed and stored in the variable objdatatable filled in the "output to" attribute, as shown below:

name        subject    score
zhang san   chinese    78
zhang san   english    81
zhang san   math       75
li si       chinese    88
li si       english    84
li si       math       65
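in source form, building the table amounts to handing the header array and the data array to one command. the sketch below is an illustration only; datatable.build is an assumed name for the "build data table" command, and the two literals are exactly the attribute values listed above.

// minimal sketch: construct a data table from a header and row data
// datatable.build is an assumed name
arrheader = ["name", "subject", "score"]
arrdata = [["zhang san", "chinese", "78"], ["zhang san", "english", "81"], ["zhang san", "math", "75"], ["li si", "chinese", "88"], ["li si", "english", "84"], ["li si", "math", "65"]]
objdatatable = datatable.build(arrheader, arrdata)
traceprint(objdatatable)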


after the data table is constructed, various operations such as reading, sorting, and filtering can be performed on it. first, let's look at sorting. the "data table sorting" command has four attributes. the "data table" attribute holds the data table to be sorted; here it contains the data table object objdatatable obtained in the previous step. the "column sorting" attribute indicates which column to sort by; here it is filled in with "subject". the "ascending sorting" attribute selects the sorting direction: "yes" means ascending order, "no" means descending order.


figure 23: data table sorting

the "output to" attribute fills in the sorted data table object, here it is still filled with objdatatable. use the "output debugging information" command to view the sorted data table as follows:

serial number   name        subject    score
2               zhang san   chinese    78
5               zhang san   english    81
1               zhang san   math       75
4               li si       chinese    88
0               li si       english    84
3               li si       math       65


now let's look at data filtering. the "data filtering" command has four attributes. the "data table" attribute holds the data table to be filtered; here it contains the data table object objdatatable obtained in the previous step. the "filter criteria" attribute describes which rows to keep. click the "more" button on the right side of the property bar, and the "filter criteria" input box will pop up. a filter condition is a combination of "column", "criteria", and "value", such as "subject == 'chinese'", which keeps all the rows whose subject is chinese. we can add more filter conditions, and multiple conditions are combined with "and" or "or".


figure 24: data filtering


figure 25: data filtering conditions

use the "output debugging information" command to view the filtered data table as follows:

serial number   name        subject    score
0               zhang san   chinese    78
3               li si       chinese    88
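as a rough sketch, the sorting and filtering steps above could look like the following in source form. datatable.sort and datatable.filter are assumed names for the two visual commands, and the condition string mirrors the "subject == 'chinese'" example above.

// minimal sketch: sort by the subject column in ascending order, then keep only the chinese rows
// datatable.sort / datatable.filter are assumed names
objdatatable = datatable.sort(objdatatable, "subject", true)
traceprint(objdatatable)
objdatatable = datatable.filter(objdatatable, "subject == 'chinese'")
traceprint(objdatatable)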


json is a lightweight data exchange format for storing and exchanging text information. json is easy for humans to read and write and for machines to parse and generate. json is similar to xml in usage, but smaller, faster, and easier to parse than xml.

there are two json commands in laiye rpa: "convert json string to data" and "convert data to json string". the data here actually refers to data in dictionary format; in other words, a json object is equivalent to a dictionary. converting between json strings and json objects, plus some dictionary operations, covers all the data processing needed for the json format.

let's first look at the "json string to data" command. this command can convert a json string into a json object. the command has two attributes: "convert object" attribute, fill in the json string. here is filled in '{ "name": "zhang san", "age": "26"}'. it is important to note that in the past, when filling in a string, the default was to use double quotes "" as the start and end symbols, but here single quotes '' are the default start and end symbols. this is because the json string is the attribute, and the "key name" of the json string is double quotation marks "" as the start and end symbols, and the whole json string uses single quotation marks as the start and end symbols;


figure 26: json string converted to data

the "output to" attribute is the converted json object, and here contains objjson. use the "output debugging information" command to print the json object, the output result: {"name": "zhang san", "age": "26" }. you may be confused, since it seems that there is no difference between the json string and the json object. well, they are in fact very different. they look similar, but one is a string and the other is an object. let's take a look at the operation methods of json objects.

add an "output debugging information" command. this command prints the value of objjson["name"]. the result after running is "zhang san", indicating that the data in the json object can be accessed in the form of square brackets.

traceprint (objjson ["name"])


since it can be accessed, it should also be modifiable. add an assignment statement that changes the "age" of objjson to 30.

objjson ["age"]= "30"


finally, through the "data to json string" command, the modified json object is converted to a string. this command has two attributes. the "convert object" attribute fills in the json object to be converted, which is currently the objjson that has been used before. the "output to" attribute fills in a string variable, which will save the converted json character string. use the output debugging information command to view the converted json string: "{ "name": "zhang san", "age": "30" }", you can see that the content of the json object was successfully modified.

strings are the most common data type, and string operations are the most common data operations; being proficient with them will greatly benefit subsequent development. let's first look at a classic command: "find string". this command finds whether the specified content exists in a string. the command has five attributes. the "target string" attribute holds the string to search in; here it is "abcdefghijklmn". the "find content" attribute holds the content to search for; here it is "cd". the "start search position" attribute is the position from which the search starts; here it is 1. the "case sensitive" attribute specifies whether the search distinguishes between upper and lower case; the default is "no". the "output to" attribute holds a variable iret that stores the position where the content is found. run the command and print iret: the output 3 indicates that "cd" appears at position 3 of "abcdefghijklmn"; if the content is not found, 0 is returned.


figure 27: find string
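in source form, the lookup is a single call. string.find below is an assumed name for the "find string" command, used with the attribute values described above.

// minimal sketch: find the position of "cd" in "abcdefghijklmn"
// string.find and its parameter order are assumed; the last argument is case sensitivity
iret = string.find("abcdefghijklmn", "cd", 1, false)
traceprint(iret)   // 3, or 0 if not found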

let's look at another classic string operation: the "split character" command. this command uses a specific separator to split a string into an array, and it is handy when processing csv-style text. the command has three attributes. the "target string" attribute holds the string to be split; here it is "zhangsan|lisi|wangwu". the "separator" attribute holds the symbol used to split the string; here it is "|". the "output to" attribute saves the resulting string array into arrret. add an "output debugging information" command and print arrret: the result is ["zhangsan", "lisi", "wangwu"], indicating that the string was split into a string array by the separator "|".


figure 28: split character
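the corresponding source form is again a single call; string.split is an assumed name for the "split character" command.

// minimal sketch: split a string into an array by "|"
// string.split is an assumed name
arrret = string.split("zhangsan|lisi|wangwu", "|")
traceprint(arrret)   // ["zhangsan", "lisi", "wangwu"]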

when writing string processing flows, it is often necessary to test whether a string conforms to certain complex rules. regular expressions are the tool used to describe such rules. with them, laiye rpa can search and test large amounts of data, which is useful for data collection and web crawlers, for example.

let's first look at the "regular expression search test" command, which tries to use regular expressions to look up strings, returning true if it finds them and false if it doesn't. it is used to determine whether a string satisfies a certain condition. this command has three attributes. the "target string" attribute fills in the character string to be tested, the "regular expression" attribute fills in the regular expression, and the "output to" attribute saves the test result. for example, if a website had to determine whether a registered username is legal or not, it will first write the judgment condition of the legal username as a regular expression, and then uses the regular expression to test whether the string entered by the user meets the condition. specifically, the "regular expression" attribute is filled with "ˆ[a-za-z0-9_-]{4,16}$", which means that the registration name is 4 to 16 bits, and the characters can be upper and lower case letters, numbers, underlines, and dashes. if "abc_def" is filled in the "target string", the test result would return true, indicating that "abc_def" conforms to the regular expression. if "abc" or "abcde@" is filled in the "target string", the test result would return false, because the length of "abc" is 3 and "abcde@" contains the character "@", which is not allowed under the regular expression rules.


figure 29: regular expression search test
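the same test can be sketched in source form as follows. regex.test is an assumed name for the "regular expression search test" command; the pattern is the one described above.

// minimal sketch: test whether a username is 4-16 letters, digits, "_" or "-"
// regex.test is an assumed name
bret = regex.test("abc_def", "^[a-zA-Z0-9_-]{4,16}$")
traceprint(bret)                                            // true
traceprint(regex.test("abc", "^[a-zA-Z0-9_-]{4,16}$"))      // false: only 3 characters
traceprint(regex.test("abcde@", "^[a-zA-Z0-9_-]{4,16}$"))   // false: "@" is not allowed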

let's look at the "regular expression search" command. this command uses regular expressions to search for strings and find all the strings that meet the conditions. the command has three attributes. the "target string" attribute fills in the string to be searched, the "regular expression" attribute fills in the regular expression, and the "output to" attribute saves the search result. for example, the "target string" attribute fills in a section of a web page that the web crawler crawls back to, as shown below:

'

 

'


fill in "


figure 30: regular expression search
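as a small illustrative substitute for the web page fragment used in the original example, the sketch below extracts all price-like numbers from a piece of text. regex.find is an assumed name for the "regular expression search" command, and the text and pattern are invented purely for illustration.

// minimal sketch: find every price of the form "¥<digits>" in a text fragment
// regex.find is an assumed name; it is expected to return an array of all matches
stext = "phone a ¥1999, phone b ¥2599, phone c ¥3299"
arrret = regex.find(stext, "¥[0-9]+")
traceprint(arrret)   // ["¥1999", "¥2599", "¥3299"]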

for a detailed tutorial on regular expressions, see the online tutorial.

the "array" command mainly completes the functions of array editing (adding elements, deleting elements, intercepting and merging arrays), obtaining array information (length, subscript, etc.). in the "array" directory of data processing in the command center, select and insert an "add element at the end of the array" command, which adds an element at the end of the array. this command has three attributes. the "target array" attribute fills in the array before adding elements. here, it is filled in with ["1", "2"]. the "add element" attribute fills in the elements to be added, and currently is filled in with "3". the "output to" attribute holds the added array variable and prints it with the expected output of ["1", "2", "3"].


figure 31: adding elements at the end of the array

let's look at the "filter array data" command again. this command can quickly filter the elements in the array, leaving or removing the elements that meet the conditions. the command has four attributes. the "target array" attribute fills in the array to be filtered, and here it is filled with ["12", "23", "34"]. the "filter content" attribute fills the conditions to filter the array, currently filled with "2", which means that the array element meets the condition as long as it contains "2". the "retain filter text" property has two options: "yes" means that the array element that meets the condition will be retained, excluding elements that do not meet the condition; "no" indicates that the elements of the array that meet the conditions will be removed, and the elements that do not meet the conditions are retained. the "output to" attribute holds the processed array arrret.

if you select "yes" to print the filtered array variable arrret, the output result is ["12", "23"]. array elements containing the string "2" are retained. if you select "no" to retain the filter text attribute and print the filtered array variable arrret, the output result is ["34"]. array elements containing the "2" string are removed.


figure 32: filtering array data
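the two array commands above can be sketched in source form as follows. array.push and array.filter are assumed names; the last argument of array.filter stands for the "retain filter text" option.

// minimal sketch of the two array commands; array.push / array.filter are assumed names
arrret = array.push(["1", "2"], "3")
traceprint(arrret)   // ["1", "2", "3"]
arrret = array.filter(["12", "23", "34"], "2", true)    // keep elements containing "2"
traceprint(arrret)   // ["12", "23"]
arrret = array.filter(["12", "23", "34"], "2", false)   // remove elements containing "2"
traceprint(arrret)   // ["34"]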

the mathematical operation commands are located in the "mathematics" directory of "data processing" in the command center and cover various mathematical operations. these commands are relatively independent, so only one is explained here; the others are used in a similar way and will not be repeated.

select and insert a "round value" command, which can round the number. there are three attributes of this command. the "target data" attribute fills in the number that needs to be rounded, the "reserved decimal places" attribute fills in the number of decimal places reserved, and the "output to" attribute saves the rounded result.


figure 33: rounded value
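as a minimal sketch, rounding to two decimal places could look like this; math.round is an assumed name for the "round value" command.

// minimal sketch: round a number to two decimal places
// math.round is an assumed name
fret = math.round(3.14159, 2)
traceprint(fret)   // 3.14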


time operation commands mainly cover conversion between time values and strings, and operations on time objects. first, let's look at how to get the current time. in the "time" directory of "data processing" in the command center, select and insert the command for getting the current time. this command gets the number of days that have elapsed from january 1, 1900 to now. it has only one attribute, "output to", which saves the current time; here it contains dtime. after running the process, the "output debugging information" command prints 43771.843969907, indicating that 43771.843969907 days have passed since january 1, 1900. you can roughly estimate whether this is correct.

after getting the time variable, you can use the "format time" command to convert it into strings of various formats. the "format time" command has three attributes. the "time" attribute holds the time variable dtime just obtained; the "format" attribute holds the time format, where the year (yyyy) occupies 4 digits and the month (mm), day (dd), hours (hh), minutes (mm), and seconds (ss) each occupy 2 digits. for example, the format "yyyy-mm-dd hh:mm:ss" is converted into "2019-11-02 20:29:58". the "output to" attribute saves the formatted result.


figure 34: format time
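the two time steps above can be sketched together as follows. time.getnow and time.format are assumed names for the two visual commands; the format string is the one used above.

// minimal sketch: get the current time and format it as a string
// time.getnow / time.format are assumed names
dtime = time.getnow()
traceprint(dtime)                                   // e.g. 43771.843969907 days since 1900-01-01
stime = time.format(dtime, "yyyy-mm-dd hh:mm:ss")
traceprint(stime)                                   // e.g. "2019-11-02 20:29:58"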

in addition to converting time variables into strings in various formats, you can also directly obtain an item of time variables. for example, you can use the "get month" command to get the month of the time variable dtime. the other commands are similar.

collection operation commands mainly include the creation, addition and deletion of collection elements, and operations between collections. first let's look at creating collections. in the "collection" directory of "data processing" in the command center, select and insert a "create collection" command. this command has only one "output to" attribute, and it assigns the result of creating the collection to the objset object.

next, we write elements to the objset collection and insert an "add element to collection" command. this command has two properties. the "set" attribute fills in the collection object objset created in the previous step, and the "add element" property fills in the collection elements, which can be constants such as numbers, strings, or variables.

can both numeric and string elements appear in the same collection? the answer is yes. we can call the "add element to collection" command twice, inserting the number 1 once and the string "2" once. print the debugging information after running, and you can see that both elements have been successfully inserted into the collection.

finally, let's look at operations between multiple collections, taking the union as an example. two sets are constructed by inserting elements: one is {1, "2"} and the other is {"1", "2"}. add a "take union" command. this command has three attributes: the "set" attribute and the "comparison set" attribute hold the two sets to be merged, and the "output to" attribute holds the merged set. print the debugging information after running, and you can see that the merged set is {1, "1", "2"}: the union eliminates the duplicate element "2", while 1 and "1" are not duplicates (one is a number, the other a string), so both appear in the union. the key source code is as follows:

objset = set.create()
set.add(objset, 1)
set.add(objset, "2")
traceprint(objset)
objset2 = set.create()
set.add(objset2, "1")
set.add(objset2, "2")
objsetret = set.union(objset, objset2)
traceprint(objsetret)

